[Binary artifact: a POSIX ustar tar archive of Zuul CI job output. Archive members: `var/home/core/zuul-output/`, `var/home/core/zuul-output/logs/`, and `var/home/core/zuul-output/logs/kubelet.log.gz` (a gzip-compressed kubelet log). The compressed binary payload is not human-readable and is not reproduced here; extract with `tar -xf` and `gunzip` to view the log.]
2"T4LuhX)+; ZDl ha[0 ^ذӭlvUzLR]a(> K:& 2mN`fUڎjSoexG}U R#=%*r&ɭ2wTpGڢRE}ɠugaU,j\7ooB\"/هK PÎKg8Yz?5Mra|ޣeA?Avssܜe_'dt;v}{Rn=2wբ]Y{,elf*0sVn.Rnl~, |纞3O#an; numhA=JUIa)߭dV0[<`.Y֭ ZһW,Z^ܘjWNJ7uj-iܱ;8ə}'mqIBϫU"V][rnd'%Lץ #}Z1Ľ!by6rPU]RMxj&hE[$šVev[՘RF2=kL4b,T۬S'@'-}U37CczƳMց򍡨Q{pR0v2p"341N[AGS6uVԘfDfc /)M9FX12e)c!,1F\4(29=,Vx Ma4D9]V `}s&FpE}N6t9j ~53o!$*8ʝ7\ 3f4&w0Q^1Tk%imb3)XjߓN@mR{cM7|@V6[w%T6/7[PK`qNRaW$'BYascTL-'v4>p[TA҃hBϷiK eGv)( Zm2F{N·e&#QHYu2Pk5f,`ZF XhFs+%az8_!g/՗2',ue cRHʱrpuRHcòiMtU}U]Wv_4GϾ /kj!fr6 B(tdqDNt2D2%S7 AP{I(2ew19xHInJZ \{1̆ڹXYDѤM)&wR쪛pž8T['eq=aq&9ŻU}&gmb;oS01].m_JeZy^nћd& NEAqa Ak.ڐXTU5&G:J`_% dJV3z]6gp&E͵t+pTmXm85z]Xme8`_JRQ0>Y)o+!woNo*>11!UT's"Z& ZFwO,c<{Ǟ2\e-P8|zzv9 KyWURʝ|&oo~8Kr!})O$OgZ=co3=coϟ-0=-]z敞izZGGstT1'm%)ģt68lMp_@YgF8hK잟3߳{XCD+-94UsNpQ %E!$` D6;s 1;ےq"W u^HTYiTe  q >8l V 4|NsʇiVx 9OwvHkn[îj-W=]¶hD-Ъ5>U5zփor>xSi49H]5ӣ[ٚx׋Vn}X~i-!3G?/Kxu}p˨5H@PRH%Bp 2q&\fX\z٩3LS fv!Ae;ZvklR2:ݿL}]M/7{)7[HЇl{/ 7~(yK,o˔pX8޼ upѢo`]m<A}/!-% Ϻ:ܱ`BBqeZn:}W SLA=ȤֆCy԰YqEn߂.4Ӳ<>i!^%k2ro^59GA\ 64-i(11SZ[|nqœ43ߝyt&/ f.nyH %aF-Zm^:.oFmwqǒ2/;g!*q.t4Nj[c)ɞq&x<*i5Xe6Uo'[y |=aTWnɳrTdq2 K7\iRȂvEˠ"E o]Wux6cYm|oZl澹b܁o{f7X~ޖC1=gFV>Z0X7Vy@as? J%&q W,T)@2yd˝ .W}p% FMA2Z[y0#& e- ZIDkd= ;W+b>m_cA,כkz1mz;zVp[=LS}JyH,8o]7nt(pڍ@7ma6Xjv='NZeacJ)ň&seP:T^/t5^D}@h|Z21n(e*R N%scik[EPܐD$zTEȟ|v/PylT)eDS~zO815m! L̘%ALoJX_;YӋqWsa iM +zNGoT:%6iœI =a2Jf3 H04.F2?%^{wP?+!*(kuzBi6e[ iߚvTH.wUn~؍#igV``F|0|8(d}Mɬ[rJ` θd-%Mď.3R'iYr@ R"+ ^7^м ç68;>TVP9/SGshht̵JmѾw˟k۷ՏRiAW+ryۏaf&se| Fg`>=Ӣ`޼6T ߝώ_fw F1},N8Gx5+At=]=!tNpi:Y],HhcfbrW}O5;{l}ˇY  {0-NB4 N󠨍C~-}Z5? 
˃0<;RG~xO?ӧ~ f`B91ٓ' @' V?]K5ZZG j>b/ʚm!poåmHʏ_ο~7$],3p۝te5#U8Wh$'\v{G3Qƨ['U!V|,1K=ҍ L < PvE٪?&U8I *5c<{&μ DXs.G9YR'&" U6,u^ӊϟ{q{~R%ǴC)WړC:n3 zgd])zsNMbʯ&yƇUfg&e4(Ajᠤmd:0\fX~V=3&ޅlr╀O܇g n?v͂ios]3t~'!6|vpYH%s4Y1wrKNIX :>q -؜+ Z:|+\m8%N xrz]Yo#9+F=be~ qxԥԖ\`,+%٦l]v&E1ս+_ozG7ݦͲf96˱Yj9)U |,6m @yq*TZUwЙDjNxbӒ"v}ft؅ wm:RL&]^?{\0^~&غ{Қ[s4.=]ؒ`%q=5y4;絖C1[Թ{CA=/l;oё=f|eCmېmNi{}d~rkyls,8wduJ,ԈYH˒hΙJVc䔣|\]*7[1DF3M@ 20Ko &D}z$o2vA2vbu }lA[+fΠ<;Pi(V݌p t܀nҌ| @L@㊀ ֺ-L+;t,=օSJ Ucm)TWic"10@GcM9O8QQD1@FE;oKoqݣqCm||LW߾ c}*4EW> ە-1uoﯮ'ӂͭͿGFoҸ2mԭ[E2g o̞Z~85j}{C岶n iDZ@SAxmxmxmnxm-(---sk k k k 6ue(:11pURbDgBfg4MRzƥ0qJh'tF2s∷R. xś{)&;SUP6;n#3&g"fUy.dtGp (NYiJAj[W@I5"Nw C2Q%ˇPnhx5_Mz%uDI>h%M^\{_ssznNT}+T'r u`^xn72|tK>߿6})E>[ ̚.kr"iBAL#% F(J+8Jg3 )!7eo) ~.NP9oJw{D ['Q^|}yWRnM.w8c nE2`6<`nXMB{̂ՄB3 DoL=KݫƶS}zԖZ6. >Jm WdzaxcQiKr.Vh zſjEgtz1-Ez&f㑇>C{o]zבtv5/Z4<ŖGz&*dT[uma»dLdQlCJg.vBͺ :ik1s=4nwoQ) )Q^lbU Ιs@nB$ ]vGJ fIdO躍>jXj/HZ-uԻ96tt&tѽr*z/_,K_ QrxQr%+cG 76f(IƭLF`M)VOQZJum PT1` Z)ܠƟOiOY7hRq̥lCЊ)sT͞iPN5 *Kz\Mxf4"_or|7 8P=+g m͹9웵A9Bg'¤pv ,^>~@(w?~v<7g$a0bF`rv?ϮCM0Ң|Kd2r3h -MOF:FL PKd.d*{<\X/GGz .)Қ I*,KB`рLIa"ǖ]P ]{ X w}}=,ɽ߰uL{ y >?9p| LO=8PFZiF+ORSz9/ޠi~a`p-~Z>-5Zק3as_#8D+N1Lh[*J}YDy9Z" s⠒!r49g@* E!lplY[LxͅS^F55vYtsۥ-쫮L>q)G2ɴr$s4:)t>Eo7Yj.t=uE˥nzxdWc dFykR s n wQSV.+jFet(tGB. )k:fyt)S1&eTR5c׌QYtaD](he]h.<:ڊk{7;撃iXCoy%Och69!|*:!%> PF!jlaDb9A{.xQB+ۨ3{&*iЦƮ6ۏGq͸]mu\^״F5TU 6o$4^1/)Q D⩐ CGa4s5|T/%dFl(UJ05\G $"P(8h4PjXOIZ&iǔ-aG=sNE_Dy.g逜td%hzݽB!NduX#3I*9H"(h)##׳QYdZfґ2̅g&K K.kimYmC$LLQkeZf(UPٛ*%BHu),/,6SkAcBkZi*r3ՆJ&4%|q;h$sjwI|\؍suq2⼨SFiG灘@^?ޅ,Am\.})XIL@ɽ"NxpFXo%_?EOn)I, t^2jhTr`eҬ.J0FR\Q[\eƭxBTjVՆs`wLn;{'\*n!eqam9NHfH m svT4&c IU:s*fVUe2{U2-tD@28Ѫ2-˹ J8,Ն_W Ƿ,Xw^|mq_bM1bͲ絖Cq$AẋNE{:^T#s[{b[_6sUfY)@:Kjg5?ϙ<#YWEIpp,u|I£s eH*&L1XzvwT;RS4>-b߹tylbl(dNiy朳DFU`@XJP*"(m+3pQTB.fh3qDq@xcxgBȄzKq[Q[* @j_5Hڥ[R}̐_[o{^T_p]uyHj^e IӐd>ƨ8+GTçrQCw?VeTiyJ~kzyBr1}\eIii68q7) Yx0H"k{d9d_p-` J^Kvrf\NӅݜLg/N5gf Vm`dKF{^N{exw5킫aHc4nv8Vs 9@/)t3V%O߽;]\I~ J'哰S)jrYámYUBtḞ͝1-J#kO׳ׁ#QL. 
:|>!`<z=kS߼巚Qᮖt.pi:Y_,4fb'rm/ :[lsAv5Vƒ®ZKҰU(U_O`Em>o9;JP5ye /i9ǟ헟~?G.oD;_hf)R] zQyA4޴ZMS{˦GbwhɚChhX]mIaןoi(x7Kz\8d *zq+mUevW+G[U!V|(1ӕT&΄.7qOw(.7GwѦ8.$B)5c<{&μ DXCzN;v *Fk6,u^ӊϷ#:qXi#O2Nf<bր%^&yv; 7h݃pR癗:4<jk%ft}{,-']<UAR%_Q /go9꼂S4 !8ke|eu`\oL$rzNC/y`>ٵZЧUfN5QNy߀rgmT)=I{#%Shw9'^x faKp3Bb"XgaP EnK KBW8e)~_nf5-h;ǰraXRgѢo外 Kxz(RcspEcP$JRhs㹂c5lnR?O שkkEJlgvQ!Z4aPnק\\G$^0',^B; + F*X&ڵ|wcXһ^5nԓڰywp{'+5t\EUЃtI;5(>.Dbktċc{ /ҕi<ؘthQp]Z/:&h':3*2q7]XReHBPHٿk.TZ۬K'j}=}-fng CQa[ >!wY! cqLR,d(1h(4k$?Bݬe{S<n !5L,3Y(De^Wu{Bzݻ՞ p/qgca.ۂL]]r|R,h@8 T @ջ࿀'cC]f2T#/s;ポG au%ƑgA\FlutZLO6TPåy ⬼Z{GU> M754rƃ+R1d04ɭ߽;Oɗq*}ZE\{z0{^ME28[-5(6Q[5ֶy.[i\h%u٬@nZ0dd$=z 9V#O47&Je)eYً̔LB.Ĕ""D)9x%Uqܝ|3AD wˤԯC8#C"6W' _.ȶn=:: :2J{CGw^食6kekc&pP6^ l4gwɩ؃.fF(IƭF+}pRr&{ƕ3$r4V }!īq%~6( Z)[ߠni7Uh2E)@$b%@ʂ 'F4j)ddt.))iQ[Ex;1hi+"C+{ý83{oH"QZZ6?] ǍU P9YCGV!o@`ʻ& %Jm譢"d $ %bbATm8ГFȨN>֝5ꗎsE1M_k}`!餹ɆB%H4Bqqjq{I%%MKb uUQ[O,UQsͯq[zG9T~LtOOBZh2Ij.<54T#id}2c5fksEw/&P=`cΉ ,GsgJq! ȲR0u \gf:YkC'6LqպJnhc0}rrM統oM|a7ѽâNuҲ0<=<Ĵ\9m1@[n{GLqsv>541)2EN^7(=Nj{- tG?QO(\ɍNddVmA%\XLR#8 ?/ maȰ%>&2'A̅A$R0t`13W~VΎWCsy5zOڹ7glq ݥX Nm?"10yd#)M$Lf\b/ru& :f^U*{8$IX 9жԯgB 6b HB,P;%Qd27+_?qb~mWa,K'nQ.첂Ș7*EH&bYri(g\㤶M0:&mݖ>XvvW;L'[Gf)q#"e_E蘍td2Ns`LqeUbBɸBPāhz)U"(3hѱug#:?Sj<-=EZ< y <-g49;x-ws]滪rd^gɰuޕucٿR);n 8m\.ZUr EIest#j]r^Y.u!J)CZ 3MHI$=$Albʧ lYUv^.AO:69x.ebtųUyɏBΠ`x$,,pm8Wf;g^,LBx8 ",ӏ,QpOUv9O[m [Df%%S2Jz.HS5`<}e# H3b($TE nKCQYKc)qRv|siCm??kST)mV,f_y>|SX>ԿןN/fUV E3:ӂ|ϣ|V{d{h)SGt19.Y [D}͐Gz<@)!;g"l'ro6G<0^aUH`RY g!{`,E\[.I>\x},&>}O#2(BCttrt ҽ]A`~rvRF2V趱t/o.4OfZGpj}ڽݣ#Mxu‹l0F+8'zH=ګs#H FZ+Ք&L6$Xkc->2IdIRO$鷞mXżq>>uDxƠj$Tc;co!#yjDD 1#'Gu3c)h:M<4Σ߱$Uqm\iKjKjOI'.S_8C/s}<5Gb*DKXG)&(dH:<7e! tg &e`2HhUie:J)$}0$ST lj`#=Kwv|ڒv_fx=-Jc4֤8jNJpr|=؆/5_PdClbv]JsP%ʔnI9YV" q $#j(Q)xOF @ 0{$Fo$ ]g(qr9|vz<ɗt9*i%arbRIu(x±jW$)EĨ#}nS SEaxc̈cJ^8z&i] XB2;n@W6ykm9.%XH^zWC/b}K)Wu LjTSn̉7?[d25{}dʞDJ})O>||_CWjjsZ/n zJӁ^Ӂz9M96'V(NхŶCmwp|~c11>1'M˓<~l1x:\T sM CݫuB9uf 3]DD~$(n5]Mb ,";? 
IWj+X`!:"XKZ ;l(; clϮѐ֟zH6']* ߯*?>SQyONXP;-[N!+Jֹz[I ˘|kDحiz| q0RQ;툆F1 /@T{OC]goC羡_?裏L{F67P>u^C/~+B*wm_bٱp-.MOL0"M^JV]8;ӋYwrujqɅƛ.:˯b B,t;bGM`7쨉AZ#0W^!ݤk%$D r A]!?=;X@g,qt yȪ AP)x-zwrJ^kW DpկGݧw9+?vX u|f~q>-ѢŢҨݑ:{oυ^GkQO?LZ]%6g{kU;"ywtASO|#mC (}z饙߽-@C \vvq5K^CgGy<vۛcj]Z[6b9 ӺܢedWܰFj|p;sAfj;4[Ѣ7Mc06d^|D:B(*Rޭji8s'6ƟLy`RF ٕZL)pWRd*Pl *.T5H6ST:nx8|, V[.Qlxz]~<WG+k,DjKkr[v(Y\+l2QLBS1kؾ/bY#bVJ"-ŔEv +Չbc~hZE>e 4$6+aANWes)ڵ06@@8cq&XE"h.} IHqFiZBPs 9\LlVcrIRYW NY Cd2*Ivbґ!'aZ  oaKP3c`d$ :M:,؇Sºl{X4u󵾘e*ݶ Akq6Lmۺu4Bi`M:R*Dߦ)3^eT,zWs.jggkK6^Pp "ܣu!MI<fw>f(*Yr!dHL:d ?[lm>n@ͫ29j+K̑3#H-ju"ERwf:|H ~x8}› R!6oDԠy1ע)CZ*, [a0K5VJ$1*D䒳"acV@jU"=2WW٥$$Etȗ Ѹ hTh4"j[0EEu݉-X ~9xT~A(7x Qub`&[*јGA_TKi÷P',FZ o`j1֥U-g*){͖ A,Su0'CRf-a)5% ؆(ZfX@W*w\tt\+!5ʜ"(BO[՛\MQB1URI#bE6ΐLl;cm7XNMaHaNQXRҫaP2FnTzSi>ic2n=_z 4CT|(,|q`P|G Zg&k֮Eau.ڀ|6Ud&*YlY@޺ u&#%RI9WXӃjV `Hl o`::LǮlB*vNeU߃.'k[S$ ا ZN~u:EH {!B)96@׍;4,r`W켵 F x΄{`u-]k1.E͘V-)`#BT֡]PӠ܉D1!w fl'תMQt @æ(YUhKQ!Xe ͆)<"Y/lo 0kA@؅K҈H)d"ݎgxS'a0-Ӆ~a |H\tP_STݪěq+2ptGw VES ԏ/g/9VwFqf`%g ^YHkAYV`9cŠ<$J,x— W &UMDh) e@wۮ[4h0 6ogM*G!v,0 d!:xdWn6̠D$GTNVU-/16)Y6Ǯ\63hE` üOTE @LEH! 
cȡ( КKda 頚:a(XVrlA(եJ-NT2hA.A %t`- ŴM_XqPn=[BwDMJCx!m6]xN:.D/ F2@u(+4Fi*[&j0=;UB2Y\?NV{mQUWE*"eӋoW$2vB0/I6MCY03g47{Af|3wH+G߃$@fbB=&Hgro.2#He52 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@ϗ U)1TYx:Lb3a3wL2 2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@ϗ d9w*[O ~R[L ?~&H)2!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 Lgb@ 0c0\?&PV*RcLg(d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2z;kvyvPVف!8nwp:@)@`!q ĕx9K H\zĥC/X/~%5RvbUd=xړ4-00&:DK~Lx}ss94BwdEaux+{)onݢI_7AO kx Ҹֺ;cg8 ~b]=칳^w矅Wcא*Y n&~~qPVx.( S/Ec@wmn|5t;/;~5Ytw/lԒ'Z*r~A6+ ٲ.m&7_p }&>Y-~ilPGU3>^Y|QYN CmCe0n 9rXJTJb+/;UL:/Z[l82H+SArEZi;+RjH"9Aa6Z'+Cτ,c\ѯO?)2jc}Ч^Ŕ⳱s0_x#FT`U%wZW+xz#ѩnZOwO hUHKQ^fGSƔ#A3b6By-'!S: !er@^c^O\` zbqM+gЫ4 3r6 lLL{e+!P].59s4kl6H(N ǧLer@D{Zt16T.6+3?>:Ձު.ܥr=\Oa2lS98_i9.rxyo꥖/ޫ:v{x,z7RUoj>+?F5?.vf糕M>GQ4.ˡAr\:r[hڋ=Nq1 w\J ̺B߇oZ(xSΛ_P>;-ΚryfNnAgC=:r3]ZZ +|x٫.8BOaJIL:3PiGM309=sކetf,N2.TR͌/; oZ4&8O@f~BY•_/?}d2ð?HWU|𻛜/nlֆWcde)|]\yRϯ?[އY""Q.wمS|ܽQo=BmQ\ ނྺ oGS=?(P֯-kFvw;Y(0*5="}@SUO&hS,wJ(:Rq} B_蒣>[@fZ*!s'"TSe8qUI<>>Y"c6xm̍y%t f%9Nʕ~.l1lMg!]z 7|.{fBnv ^Wе.Btr޼3y_;'}[˝3B2 0" %Ip OjTƂ) IU:3U̴Ffgpkko`_A.=ѣَ7`Qn,Yjn= k ?dr$k8aF*eS{#"NJ]A)89\?ŒDrD6AXT&RkUqg7w&ΞȱjL.FަXg,nth]/= \GX_^>|{{Y䙛Q _p4˄^إg{ U0O"]5r%^t)1O%|ޮy{{НBlG2ޛufoV3mݿm{˻ ڣ7C˛?ݾl>:Hc_w<.բ{ t> [Ѩi4+d]?ĹDQ+P62!&ȡPh~91F8B/ Pۿkg"Aa-7qq7ܱH= >XR.qVC-QVMN:H-O:VOF bdGNt28 KbiB> &(5OMy!ڌw}Yը`~)M:6Mapx]]|kݸbf]e^qvK5ǝ7A|:`u1#-eurE;~Y h"N;7dY*7+u5aPNш[a(Gnedҧkmi^/KcTG|j`FIR-.ܧ {gMrەE`8mZoѹ4𦚬XkT X zc5&mLamK//>=^ٻM4 VF:oZ7Af&FϑΏ:Ot:)fKhgr9BRx4;7:uw?|ׯ7?{ww'}ϯ z`8 "PJᏃ`nUoWE[UC}{T-XuZf7{{@аwrr iW}obOlOW:MzŽ aVD-Տ7.bmB ;-1PuMku]^d]&`:zD~ "km%SnDAR 78VߺֲbC߮>Fw>Kl"}҂6*b"AlTS o վtZA^e,\7iB0cm}=rC4L ?G@:߆N |4ؼ ?6D4Eu Gכ\Bvo)j{WQeЯ`WmlHa"-IwjV`w-'LI.WJP+粬4cbH*9<;{=;@IgqfZD rt(zJ2$3품I9|2n: mt@4 [i˵‚H`bJ aS$Fm;P[^^/d^z'23~1 Ddqǹ'ac&9g{홑1)\n_p;fJ6j9r^SH`7srE:B oz0 AfIk"ԛFqRdkm#WE^1/E ؇9=;}Q,$;qo[eŲl$Kݬ_nGaLO@Z"i}y*x,_RʭP#ߺq.3)UiiaYވD{+W9hAb>KQªOJ:v@:6I]Q]{Ӆ!fm[ (,z'`@зU64Jr qhanCl'tנU2+Pj6TJ ً̔LB&LIl(%DXuW`g^=YV[~עH 7]NZS1-&0\=/yO=7.=;;hΎZl 
u׺3ŕQxPz߀DaiMyPߴoyM;QI#4V9TK. Oef"`vYCUh6rĀ$"W̽*:#8PځV 2E&#(b˯M{HKg%acg`teվ-<)~ ~b =U _GyֳBZ"xHWEy(qNڟOH2ɯtneG1C'Xve v*cϿZO/'yK>~yKXiFzJ$]&ҥg*|G>-3;ÓwsGڜ:PV&@\ t> K׏u汼[ӹzZZl0;~c>.a~|qŋZ\3tO2j_vu7Ʈ:nۿl\/'7^LWRfNWJK]0:_AJL]R|G[j3{bct44.`h){6::mOFh|vV֋Ωp|hKX4}s`Z~,&GYȇɷ67\nRX6t6m1}FC|4LŸ7.>& zP}Зq_=BEmZukU%,כn KbSJX< m4F E$VNxns?YyWSaZ!ZZ4bsl<ί|DO&zy8z^3^1{ܟbXֽѫ=+V/fah݅i͔~Uc)K&֦CYES YiqE~w*QPne G]3tkB׺1%=eZĵy0Sv,|Xvqoh~wӰMm7]Ljb;)?<_dy@QMņј8_jLieynX7l/S*--Eg_۲[yxv 7P^)o {o@/zB:ltYy0I3aŭz- G[=nggq2YФزD<8WEtXߤ |8)hR(IA?^A{ 2V#ʼ0R@f6(CPx"$5 (JVS}赲exT"}7n*jڧLE::)kҞRRi4Nv4.vk]vn?*p(#E# xg3 RW][C4' }o-u@=ꬼ'Mlןu<\JZe =rR@prEf1Z̑읕(6~332|Gex6|^ptZ fr6 BV@$ee&#KnZ` f皔"ېTF-+^~ɉCOzcTdw^Eo PFm-q6h:Ii MPڊׇ⇨6OCC&׭@199kCyQD^rcQ{G2]L5KnhUTE%R֛d& , ŅuCvVUEF2?s$ Ec R(M֋RĜ5qeXMG{ ͌-PXh{,y(Ml qdw'pn0~4n4 _L #v̊&'vʀOLgd2TT's""Ƣo(7y \% bi66Aץ]&f69̂ʈ]M͈qɸ<Ԯ6;Qgl6)!9Ȃe(_@i2ϔ2%b\pAllVNTì4ن+bHR 2ѽD,E*mD 󤱑2q2Vg3F"Ƕ+#"Gćf<-䌔4HK<,ayM,Bdu$Ǻ3 mHnp \pGN0P_F:!j&fD\:jd[\qQACqԮ(ņ<.wEݍKGX=D="ʫSx~GKxzq;^*EZKHXK Rњs%xYCN=\Mk/" 2 $'/k4goɪ%}=%mLxJĈ(}dM+kg[%Q *c}~~|IF +Xd^߀m-OnyqP|^n˗q 3 %@ʂRȂvct.{4{孢sc5.Ѿi{lYrw41wyccawfߐFLǣlv|7AreYU0tTNPBe!7mB}q {CuTX;Eկt i{.i0b R28˓6BL9'2"XƵ֢9U-Yb{-7q[&@7Bl^ߺ u\#f99(u5m=bޞ VH=oEJ zސ*Θ9* },pU{pUԺ7WVIs6 \qѰ"ܞWEJ{zpMJ{` OUquW_"Ӹ8"*+s4pUXHH՛+8u1 R?wSi> ~Z7>bqL}Pgӻd)$R\P}պu;#w%euUCyj:Ur Jb'&1>a;PU,u?%fY5ߢ5v`lf$guCwnJQ cQjTܝH7:B1TH;'9 |Dlb,}2 ୍BF&&mMp +j4z߀ДFӚpjϩ1}̗pmrVw;Ԋ w LaILWWK ky]9lߑh+VLV݀&H V[zҺ#zHݧEn3OFh|vVխw]jn d)}iPۛEl ^FAba@eHYrL y#P&ief_tf}7貭7x1}FCp3)YŸ)Q>2ޡޓdW|m`h2,zәIc{/5ձeE Jq&82Gb% zdM-%|QO6c2j`yFpi21Cwf55hyE'˞..+fOG7,B[ '&ep^0MT|]0ҫ E&Iv6fq`o#i4 =(|<>q.ӭ&zMDq$eT^x- b;ZyHƔ';Gk jIq=z8lTn=ƓECbef3p n8k٠ff^o DŽ툷1G5貚"S{f8nfVUZ.^\;PU'#ގ'F_]>fv8N1gь[3e+BWQzkQjIqA@m냝1GgКloUk:ȪBCnj-3:MXm{^Qz} K!GC)40^ʫgEh#u&R-/g0E#Bzxܺ1J 0DNL۠ƒJPaw$:#$RRq{ y$[+ϡ^F\0/󕫦|кT>}c/hKmL{`1Z8hbbTQ!ќVN[ɒ2 $zDx;!F{{&nB[,!y`skZ8+޾y$FJvg EqnTeKMBW}=S|  "hai)<1 P,9uo<<摞B⩍0:z.|BI <XC#w2bDN uăCx }nƸgP=ǯ]&=@_VlG֗AV[ktm˺WIShq'F&dep<.@-#|߻e(.@u<}Qֳ)>6aOzL;D &-IWXO:wCw}ծ7s Q(2pɨ҇|J ƼO0"9Dx 
Qu۝2hIG%E2>$h-D<ҍb\pJ'ShuwLjʀFw;ɧh}  O0j$`t4\VV&qkDVZܘ VTsn9}}+o TgF3 A ñz)AdA)gZDH>ϓh4%!\^d 6X[J؄Lk89z-QxagqG^IǼp9ߋl,c7CypeQ7PAsMsiL&JhH kYЄ)]J4N96<nC,A9Yf\*/I{DD02Wc89;%r_vgر'\ڽfqZ@ -XPSa3#3!QLY-!4P6)e$$چN \ )*Ik,zq%A!tA#!NTu( 1ZH!u_jOwqfZӛhMlL:g1(S=H>"p&z ,DH_Aјn,gО{UN9vڈ S7Rxʯ\ Zݛ!CWmV6euUaʀjvyu{k]ϼ4r=>L7F{[Vȳ|Ƴ,K]zlXlBχzAՕ;-:߽ܗy:F&܍<'fW[o?mgs|N_ gQ'Rwy(XLaԅW2D u.Xv۩K}sՖ$OEJ_{Ov?ڝK;ؙo_֝\ Ʋ4xZk с h$c"H JRSA.4JQ4)c(8Nj""_d(.(CS0$t}8[xG^b^2r^Ad l۔̐KQ펳rrF bm-w2ȴKHFcLp!MJa^1n&n D_uPYn0T49q]gEW˱_t{7x$!n=\|*9e)|]?*ᡚ'6}K6>M|[A3-_2IIM΁'$Xv%pƂc,0Dꍤ6y^Q=8,8ži Ou I%mHK5dDJ!XO:UpyT9%G,xB'~seΚp8BڸɺOq^9)0 DP+JL2όD}VqMJ4)BTuwePװIB8԰I@eѨd3⩣FksNA;Bm]Y9ɅHͿkqGZE9"`M {}LL\Vjj!ث>`;FaBS4.Nh)#E ד ͬث:;J{ẒVHI1T)(T8(ϙȱf@uA)P ưrx^ D D]\!z+}mMaYno}? "3p Ѻnz[UGR1WEL|dqxR.:JC"Q{V~2' X g_٧쳯} 2ҙr$*D6^DT@)1Is$%i\Z0"*ڭɀ9d> څhp`2Qf$y:鉶6ӱ8[FLxr((ǓEǮhlYC׋|PT!u>=\Z$%H #sHQy*B)Qd *12$){CErY#TcEhښ"u b6{^rDn|ܛ;g3Գ>-nqW_Wf]]6E]hݮ-G,eS8oqbfR1@'!xm*4xU}.+~3^yg[1K?;\`mvH~ɽegx/p!r΋Jue"]TIEJ^gabWe,:MWn*PW Y]o{(zݖk[7y-'2⽅7@qX]=RU堞KU|{El2*k{ fճp ;~8fUAb|6Yz0'5__ +WsdUW®2;B( =z SUX'î2?vzU]@veQKqp\ M*UDڬG~~oʑ󿕓hL([<"+2(bDMWm4 BRpC CMT\yi@񇣭7J|{uy:F-Qf PQ&a?K޵q$2K6،ԗ?xl@8îabLH~gH9(,[awuwUUլ?o=pU}3JToYD9*9q1b2dwqVB4-H![|X Yкnm,ǥƨ^m/W!!S\]ɱHR gD.~ǜl7<ǩ6(alkZ8A]x/ߵj*꺁tu~϶Y+خ֪scj#5oEOԂ2bv7zŪuD6r̝e)9HDܐJ$<PeG7VɥD"$abtt犅}7B qwۭZ) 믒 JPwd%pJ^kfE4 (;XJM!rR b2L[ mtEھf*ݠv:wjb\8).J *ώAYc]wmYۯοo~`lsۍjak؁Uf^?s%Jhѫ8}L ~ӹ؄ :=U HbW- ڨZGxdz*m}G|}Z>ߞz ZuL܏T@jRIgT+6Ы {8}+.:5^A,ܬD_pp nncFo cj-WA*!\oCBTDv`Phj^Rr7{iGP.{hhƻ޳.{y^:|EM7T='$qd2JSV}zTjqzjĚK/F3CG W"uDžQ G%+qt驡Z'W4y:p}l~*pW \1A\9Z髸on6\6N}.B|D~Bb<_ 8C\;d Broo1nMy9 S; niYvP!ԁ7ǣ8K!bMg^50-8}68E98ڑ gw~( $FPsſzg5[,2'"o U^ £c]AbYNŁewKs6]ˏ9O`.%T,L{K.SoW܃XrHe U&WWHuf*Si~oPSYb@ߟ4ٙ+r}\SPs8_l6>C;y&hV2nJkRVi"N0Kfv<G;9kgTFd EKI+9/ !r6s hOAs90*[r)OA)M$' YQJVN5Y?`B!\S  ZQ}W$nx凋/*&$GJM.!Z^:l))a  [&mimފYm=ĈHMH9%48As%1+{GýLKB)ֽ&m2qC 4d hJkP3~L&rT@ dZwUYT1ը*F60<7>(/j%>Z|v_>Sx"'=QP3 N*$3\@҆% d(Ex_׌tyۯ(q*6|Mku뷇we X0;o69Y+.[y17_bx8j/kw'ciݴ.scK z!Ȗ~rU}7 FBYܻb+[ʻX 
ydzXg[GBko$W؇߽Yk}~48j/K5˴oqK4\Tȫo-JQ[)>yo1Ps.}g+=rUQs9W;{%˼+[IZuʋϓ J2XnGbwYefͳ-eko2Q[CcM6<^P3"mpjP<(`) k2Jxޣ%@q( 3"Fw\".C\p퓯7.yϸxŻ824: dNGYv`SJlͤP\. |3<-"5iy)2cԳ m`PcdNV dMs͢:*BR&@} qLe}]T٫yu:>~Rg? SΖ2 q?ŷǝ.ʼ|O8wsRUqfGW},V~kv1rPVݓ?pahxS0$kOhdaPNфSsN~>ϏДͳZh 70<= w/#P8.'WR^En6owo3QuRf1-M* D7e{>iBQ 8hͅ$p<Nse5.<.⟗OȚWjTP\)y_.FrV?zrv|pV̖#bsi0JznۖG//gUFSd9]= λaP:i3+G p,]|<~O`vh~ni|]v:w4?ĹUi,ƩȰqLh1(3FB!Jq3dt/?埯^?^ORߞ_^,2ZgYߊ߶"-ϧ/C_]c󮹆55f>mAaoeb˭7O/Fs'8]h•z0 b~&*~**PAfoAt#AuU WesVLEYi]&~m߯H;`RBskFJbm QIu7cq'HzhoJ|a{~_DK5SO5-N^7G!̎c$k g| 6BI/|NYZ!r"!r<d2!@KS+E$*q}Pq*u!Fwj֣tO'i}]ľ 얮]㊼rhd *uSb\*[_ĭ;n?mZos<.y4e-һwix|x{;|f}w7/>.:SA{9|Kvhkzj.t׃l66m T}6應͝NprWtO\o+"E F1+%L``mƴ FOǎ]-sJR(hI R{ 蝎IyZJZRŶ|$1 _P|a![=Ja gq϶cF}6&.lTHyQ4ۍ"dc%́xBhU TS";"ЌWl`>徝UV%Ša9@ˁR2Z.7]Q5&чg3Kv΍ܫA_?j_?W)<|v˿O?ez3+다3I?2)՝2f%sBd[F2 8˕__-q/5A u_Lo{,tb3/A^_|oF< an٥ӿhYwk-`wGR}x:s Fl/fVcThivk;KQ&_+W9{wڻ1B9yF8>pmFeʢu jq3>iSg .N`V!P(68e\m"-?mHK*ѵU۾bd!HI$$ȢybD'8plr M#)H! QZC$x'L_vrɗ7O)FjGܮ"ԑzJ_FA a?/p\VW1MoV1ME8xݳ7-6%yq1i\x.N9{o[L3Z(0Tx5_ɱMw=( @WDo;(Rr ,Ee] :xV XGM~xc8jT:@A*#3x!eJbf]8ngdL"#l(@&XZKKWb6oF'k4Ԫ}1Q[[ _JREؤUuV #ZdU 4oolbx7}DCTppjq镹']s挮' ~.9:9@'VQ,Y]y./6AdRgl4]B94ОGJ{G.p@-+c)Q@PT0Q##7#"T1Pe9(I2FCis2PtE&-BLhXLY\J:]Cfg~'`YtyAs{s*Vx:E)ׄpWt8? 
- x@xڧ]~ߡ8_{ߑ5?BlZx) Z]g.eﻬ:G@; Z}Z <=sof.WKLp4NF@od[ƇA>>ޘ߿v幻 8Okx f'6#O=buJkU~^Ԧ۴-j0w|`7Hovx*%֣'&(#zeSYPl%I$!Z0ja7u 䡁gNS&ַe8,yyZ[[oZb;oz*- 6<˚19>`c+,se9 Ad=X:9a FFD'5*bکS64i ݅l4۶qզw7/wL^W`aW_J`w#:ݳLD53e-<۳[7.'](ڳUy=4/[3@;١в3nǚ2ɼN/i]qR/aDAmϯv=f "*1g zmBPQKTԵ\vjW,=+`Mw~)k"M^oy];7P=voxt{o"lzZ٥nV }]llN;mB&XIS:EM+ʪR@秪pX|q69S (kw.(IʌHlӥ0&*B:c ^Եxq?3~W8W혿_*t\#fݻ,avw ͈7To N*no"K˓[R*1[|bGo.ppgog")egK¿j5d"vͱ]:g~72%V+28Ɂh4[\@Y>f<"| ODDHAF"LfH)JcH"/b \ fre^ܰpПLJP;%spKʥբP*Q9lQquF%(>*]-LXYBe&H<.YLދ[h%vCI_ehԣ<0bWߏ%<74Iz򽯷ڕOb}}N^A:rٮ>;uP5۩V^y{NkX/U *fLJQ윎!0!ڦ_.A8 * ĤC)RP֠˯_q=61hzHQEi@A(COBJYRiu0`锔Vmj3*/ۉ2Lt%Sه8=vXv38=o]im0"V$F*~uqxf^Yh@!$JX,XDZa YFAeM] Id,d>/@bE9&6g?EluGӷҏzD##qD.xzpy7hHFE3z$DZJ8Z=$yñ1Yed("5zJ3 S95\Y+5xirm]98;8#~ՉtA]tLkʹ~Q7j_ܤR Vd heũb.`]J#YQ0C>fq,P?mq s7 ~E=Ѝ(d%kk\sy ӨfDvt)[ SЙcv(l\ >q)S/9P(h=Zov?"O]RaXWY|gJ{8_i0iE1(y`^ N!*1H=<["Ejb"vVuթsk97ݕ=O\dcvc琕Y)`y13&8™tԧ#־ /; 0F"rvήSwfpkg Y|6W  ^RK/86W\sW*0$1m33B_U!3)~S_BۻrV R3U*(r3!\(扛tftQnO٣:i?KuM|(|?)?\&f+H,ŭ~fڶT/vi( ?O UK~{JҬ$Wt[U ,SYYrgև'hԊi/&k*,*YUjU]5/ul9tG.e}?0f 6Ά yT@cYo0} tǿ]xÇOǏW{q-0 .U1z "_a틦YT-fZ ES\Qa^n|-}??.\|;?ii1kGK+g~0T7\EaߟXazXw QSڈqPH/Z1W^=fIƪȾ ' Ԝ2F;Q2&ze1̤: ܍xK'9I^ma^^ 0G a4TQ0#xT ! 
VLGM%b58 7CԩeSQqj/<Ʌj%C^LH?@ ^՝_:%ڟl?Mqn.Yoj8%7ih 5/V8d^!"|tђ :HʚAI҅" #-JGIaGA\S-1Vz_qTQ:2,"9H6A[R.k%^.)u_˜) 5I׫+eͧׄSC{ߴႫ&;s$#ФvⅤE~w|סR"rTE&gyT":'Z#D6Hzm/eK,DIRjZGtt֌akr/%x$2t<ŎGݤ|Xٸhuh6u6/GG1D1%?6q70Ŕq,lO>&͘$A`1ʑi XjI < /"XAS7~5D;+U^S$ M"z G\vh5Z:E( F` *U|v#6s6< p8UGZ;N{$7ڧTgy$R=GNMj/sY$8׎\glZuc9vcS=Gco5Y"F̬OEI"G_PZZyg^mk{]E 솬Yxӊ\ִ}a$">g5>3U/9.R볇7d:OAg׷f!!LWzl5ﳜ7g^"祔Cr=]6}~DgKeZkOjټkΪ?KEBqC~w?Tluq|s h.+2kREQ̍8^sc"%!X1SMisq<7*E 圦i<8γts2(LeP^ -&PzK _D׻Ƀa䐤2] #52 &6B;hNxM׽S9[- =PEuB k.3T8K;5Zc;A2RZfh$.Ls ҍuT=k|pp3c-Q$IAe,b-CH[#Ǜ‹oVXP( pTK:XEL!0I)wGS&`LHFJn5hTWS>X;徭UVŠn9Hˁ8iΗMԘW5&( @g_6r 7]KI޿>`#4E>9G3X,Qx}Y~zza8L`\ DTJf/Y1!>2gjٻJʅ5IC\V2żS/x?F7w3qx6sL6hE ulE"ً߬=ͦ;`< ( ly?;}:.92sSF/U]2|/=,!g0#\\|62ӯ$5cZ(ڜ-rm| Dc;t a5kmJ;Gl[ّzIĶDk"_EۼQ:j1F: lp6HFj;^sXe+]]chSj5uAe=Tۚ V4V!6 oc:7v2Y"}eدY;zIF:~R~Yl*"Yȷ~RaFT ˣA4)kqQE&, s8цZxakִW ^ }RoՊP:>R0v2p"341N[Aa@K=|&wDJaUbL*G+D*4T]//letˮrZ]N7ao<_#,C D@+p2M5r D.?C %_\o U0 3[`:Y|V/:W|鍆p/st8aZs y1y PHBQ&AzS +8E- {KQ7QN(}ɣ3:yHds+S"Wxx'{MXeϦ& YR?>Vu%!oےHH47Hgga?wt0Q8~nf=HӠAQY;<<Ķ>0F򁴾1H?\uMZ0-jt.&CM0cp%2o`>ƨ$2<' c`sI%]/0ЃqXahRM\! ֒#ӆit҆ 'Bk4-+PBcP0 ef7mq.!:&gW Ȑ1hAXcC1xթ;miCBa%JSћg+w12'1L($`uԲ!=*u2Dȉ6ZHp0 AQHp1F8eiu&Ts|w¼ yW#t``$/~5=nRAWxws3{wgzs\jȑVNeƈܻHfi<G` ɲrŽT+)Gn: ?>^g"j.`Kk^VyN0W37^eI J>/#+|=s]D崽?+dzWN.dz(lU 5u4K].q8&'70* wOҽxNL˭_'o/gg/-Hn}ӠzV-7cI?_f'><} `<Ԓ- fXc3[̲|D0 G0b^Gu=ӛ6ٚQad}u'Zm+AfNsm$7,Q}:SR/czD{*ܯ PB?B|tG'}ۓO(3'oq . 
G "N"L 梭inMWmz>[[ݕn 7 v`Kni@|0؟WΠTijnVMpew=XF3X( *JzG*,X>1Ӆi{rYӞp~!Y(Y/GUӽIL )f)EMދ|h(2HJĹJ(b7=cq/S$=nC|Ў#xF (gyrB0N (QC:jM$R,J46׽iAhlM'<2G.v>!tzWuYd=zK!R[A衋[` 3uQٴM|K!Z[{?}6a պD+G1br@[…3\b&۝Ѱ8=Y SAT #d^{ \)=(_i~ڃk-7ZN@$3t @)pAIJrJд]<20185>Kvl+qw`Ǔh-ieGf}S嚸,){,/Omlf"GBu H60R(OE9vc9j aDSq*RR)QcNJO[0 q*&RkUS8%3xC-@r1ehu']ޢ楒!OoyHc\7 O7T<]nvxxN\0[fVWee2vvSw(Cȭ͟)6(Nb^q-!Q g--¹$ 5"P:֘6rz8dyZ *uEk%i!|s LԊ[ϴ5Rs!eݗ<$%h*rrm(` 4pzb, 2KlWǣA?\(yD &ZTp\NKD;=FC43#c$ kwc3 ߇ h(ħ9Nf=A+r2 8 !$;S@w?$*qRl$2χg0m0&WC$Z>R-p$q[ML``$g@{ Tb3ڋۊPeZ0 i;PQbe۬g;n.;ųʚ;k2m3JNRzL X[.n61;Ǎ4ᣱ&Aor4xk"Dc!1:sFPR[4nW-mN3X{FXxL 7ry+B׻k o.MS_o˰4.˾u=oOgeSჺ}s5L}s|ߪy@岴Vli=bPHk~$I3L ûI3WYع)5 Vț27xp(p1\$dlmt% &.WO8n=n vR-%l"X^fTp9o8 Ku^0'E KN\:<:* ɚV >jC4g=~0J=C`ؘ}q,]}Ĉ-=vQw¾)_a)"g4LSb ,5"&!Ra1D+R Sa]y"f֨0wYWC%Z'as0$q 'dSV2Q2DlM- b嗿ErKq1XLMu*pcurݟD1^}XOJ:^] h1cizI@w:RoD 7<$EД[L!qQ [׈G.͗IQ:,O;g44|w;" ٻ6%W=fcE9AɾmURL ID>CR.jJ4l3fMOUuq)&㋛ĒDwIY,0+' ~tSyBh>ʴ{A qqR!pV0̲H:Np̰lG^f2줾eVXoKdvSPuD=J)mٝ{elNd%} ?ݘ5py0n2DL'6GewG쮋ʹU\+6x4,̘l$6@5}njO:g A;&ilk^0tmԒkoדјf,)6l5ѼSW\PG 5NV|+ǫ7^goσ9Z0cev,:eL)0 ^0v! \J>NO'\SsJν oxvzq>8|2 c8:I2`Z;Prx^ $UzU@p`ȧ8M./~K =TYxV'P!\E`۩^XuPJ%~~OB\K;DP6nksL/5Ӳi[T4~VdNhM 6^zd!(NKzU]37DHP%ZZJ"DDg9lr/!\2”?r>LR!h K^@@B,3,D*I@!l,y-&' = &Lc+/ךPp-r j:cI-M:Wr QO={r 1.K)O#_`:IK\K笽CɵcRJI)\Y([\ ya]4!;4n 2:G:ϊ[2a ~z]>:E8%L U*հglzXh;,yuhKN7[7ddq j@1e5JX 唋)d2\#ԝVޙdlӴUK<_7 M.&{=vK7(qm\Ѣ}\ܲo&Ss=tI/xhI[xd4|NЬ2>u UZ Q) GDZVo[reQ{ڞtL.#}Fa;'! KX(=\O/](m>E6SO("_/uwkNb߂ ~oR׸7dvֳ5M_h0'?}Mˑ+OK=* \zčbF18_3{@pEWD.Z{(pE*TWD֦7WJ1lU!CB{*T.X\ <$튶簮jhWDаpETBWofFG)S ~4e~}p,{Ë~[ڭ}>+G/Ϧu~tx|>_+3^)b[j2A>7+ jwd%*AY  .  
\r:"j}B\2zi^E0rxaz_G|!7b<LW۾zc Z pUpETr:zp%d)5tnI[v䌕6xrpzM||'䉄g?~7NC#f^3D+;<]yf\H{[-t!XۃBeDlaP):-4.!U!ઐkաQ bPo\= \!F3{ʫK/Wpx[u:a=tu/{S4:ߦ(wNW= G4n\tgyV!;=j0XcA|'Wy]v4;}%;}MО]B!ɤP"F,`יg >jfIP7Jr tKe)ڌ\e QcBk+ rjOpBuf2Ki7c̋@OW4cZﳃh6 ߲d> v)RX#SX`iW]qB@]d_T(s 0 ў5H׳QKnfBz)Zk.kkm[Mj@fȂ)j-4i$}SV‘iM ?i g%azP̖1ƿ>}<:lݾ?uzp|<#Yt+_:Y2j!.RK\;__5յ T]8Uq߹-ˠ9=--M27Ѡ22W΀|gd͖[Q52v/LaNKWK,6d"33 ˙C+uLA Q)PhSn|LQ[@rGE"ID3ef+ٰȻ\]}f tE9aC+tk/!6rr6P)9mm??c5\d,Xg*&0g %>pWK^娳@P1,Jsem $+3fϤ-sKH$ʁW x4N?ͽ#Z6k, l]3l|zn\Ix>IpEZ=u"#2oE[EH>LGs0^3ՖLܯ^vByͧ]f~3/\[>j3odig[9|3N_݄ybXcχM%h&=9mwvƟ/d (MlddJjv5ZDTb7$^tǢW>Hpѹ =iV1a@e,w,};ue@LoSK\[y)F`,B;,33,aV)3cG$6骞@F5w)q!"Դp؀gXR@#KrDLYbLNºN* Rї`h#1 6KF@:4D D#QjO4TpsoRV+{Ս2s{ 5_xu|BCw-/Ǵ_ڴ/qޯ?;sHowmH_msn],pwAYHrgq,ےؔ=r7ESOn۹nv`⢍NAiPCchϫezZ6owc&8㒉svpl\0i]Lg8MwoAѬ"FTb$HzyMq}w Iewm0Gex6w2c_g$ }j7?< Җh_ۛ_]}- ~}8;=?\(zh'a{&~4s`,+6?/HNkԕ̋ݢ}~]{㇋Iw*1[մ8 <V]ƿ\ߐ>y-)M͈t~kY,ТD#XFHr1ӫ6Ǖت`wxMnzV-[F\ҰI*U_Lqp6#wە}G? ۃ4ܿij5M-Nݣi%j>u7h˚ˇHòk{)r$/Ïi:˗oNGЖs+Ic+y2eUo7zI<\t -8@L8C9qm{ _#svlKܨ] ЪY 5#M,>; xZ{ڜҼ:Ya#QgH)c7'Rw|D;xE(dp1nX7^gDZ~f_Ի0}uW!!HG 1#g*AR s !#DBtq#UkKe,-nܿJtv1#54t$`K@ߦAnQ{Sozw٧FSn[iJ^[\RޟK~ 5]hӳb^uRҟѤC}[ί6o[y΍¶gZĴHSf7WZMWL9M7KhBȺBJ dnMLdzV5ȒIKD6xUSgAH /rbCP%f-iH2zi}TD1MNȜ2ZAAjugb#l4L{j^#I+KLB)#(F4 A鷢ܦ J٣gǽ9nf 12ZECQ *Řg*0ԛN3mrH/^9 g3P/h!z]){u/}q.3-*,˒֍@[NJ'sr&<9$bXULjZklF+3^ǬkufPLہHvFTh}<5_mD ss CDtFi4ZEWD9%U|?E6+/R#&F(2Q2- h3- A\E-;;|lgV2Q *vw[{֫~Tl2L5v0>gJU8)s~`,rF;ƉVWALp>ƒO5{G[[29$,m6Dt(4,$q+K0ƾXaܐ+0x>[۬7boڢO]xӧKh\$+"LtO-ЌQ[Z$D+$k="\q =0<0(*jU¥,X2 ZtKSD3r|>*e(g ƬqyI9,Z[)'U_mc,v^}G PжN}) .߿R / +t L;q R",,edz9l9>d8iTk3үcR#dQpn N/3<ybz'djpdiɢf)B|4B-sP*Wf`^%67hFTN4#4"/{gW.w+ Erɟ)'3©i]R*T3ji"AyE$[PA~^GAݔ8ρٚg{.ݗf*KzA(+#2Gs. EwLg(o?vHNW7lJ7~+LC1KqVrLJU|\wz[vx\/RpFazVZ!:u֊kY7jƫs21 -% EFC"*nvEe"|ׅ{WG ]=Kߴ2n0uwph`wmb2550xσ$'d7zyzx08$_v|rpy2dzôp5Ӌ>j'G6_dVAN4Xިhb|?s%}7B{tVDLQ2b ^KR;#B4AWѣDhBT:hPnd1 c.:!*BH!FϕB.26sLݗx3ң?eMFE2 % rU L2xab(A)u**f`DnhdPgG}˚OA$PEZ$0E:̥ڱkugBIRhfj9ܗsj~O)dze׶! 
rmcKsr**X5g`s**o7UC5Q=%x;S TJs2%%hRi "s:p%-HTJX[ G嬂E0V!HFjݹGj\V wK]fe~'ޞlz9M]ދ?/Єhᕟ!kґ&h"2=(IkҠcD  ]lͬRX7t2+(z46MIsfY(ʛ0d]eĮ֝ưb jW{ңKaQJ!(bdHLE_@Rϴ -T !8peB> aֆt4551CN$ DjKdW"Qt!DZ)hCպ[~ag_qotZ㡈+#"GĻf< rFE r!*yJ%TKd#󑋲UN)LPKRPwfǾ|¶Řheov{Q0덵Xp/ۙ۵rjqX#)o&q<6א.(Eh g'z#>?Ț>[!9}Jt{+R)̂[R):&r`Y\6FhRvI鞦a/z`~S5{-u goL1=BlʥNmJ头o9茪6UnTb:& ]6ЈW#ƉHBЏFY&mq̀(Hk:gDYSq }ЏG8㾃nIOQ1yP ,k%H7lzD43ؑ˹WpEl#Fek Fsz?hv ]B1$e,p*Bay:b =&Bf,G˴q R{w`_[ lvbvK՝޵#Eȗ=l|b0n3L_&05%${9~òe+NuSdQ_hȍҗ WprDdOmίHRJW-&-LhKulW4j)FE+%7hI~~5-WhxpjJbmi.(GF5kQ7گ'Pʋ>5N7&\+d /|c5 XMktѹ `tL@c͚>Xvzo9mɅ ϯ{Kޯsb4(g_5:{"uz,>뫥GPQ*x:JQwMiJߗOEU&|ӼtoVٛ.goWb7}z6ovp7 coi-woHnjBcKַtԌhlKb@LpT'2e ׋ܴ?O7tmlUͭ.:jS_  \:)i#aE篣q(U_NRo{El#N~ZmTӚYeu//h9ɻ߿W~_?w>p'oɇю'i :٣IQ5?޴MKfCVvﳏ7S+rkc I(>L׋0Gh5I#U0+D6,P *jz*K,Xd)GA} Ǻ&wKBq[A,7G[6H`}^DX%fg-Lyg>s.G6R'&" Ut>lX?a)0yEc6vʼ~쫋#<dt!d` {oy&Fpr\OH9x{ՠJAUA1U(gԙԬA=y{(> KlQ8\!X')T15=1Ec!uIBW$K+mT[tqm8m9ge=cs|t ʘ%ĜRpz6eCLJ*aXIڲr҃Ci־Ӗ[]CeN ΍VddVE9v^H_fYLv`U2]mY2b@PR}L.h3V E&йlrkh(5.T|~!2)՜ݮ;[Q ׍P ,"!br$) , 'НN{CSSBIK9GI[y&o{ψLr%hgգ4 Ի9eZBq}£0%&fz| L8Ԋ#7[ՂSR`:hQ:q.31UL%HCdT \:K<` ,E )}RqEZ>}ǁ;A[KgAq3r ߛi! q?Lh)NӴh$Pz fVIR*pVW|$Fw9`:BZ~Ob{$"灁hM;O/xkfDcQGI8Nކe{=.}FBwf!gBZӋg@ZsH @5_ r FH:*$*,V(p*YNue;1^BG:Xj{PoGMD}ᨆTvAסPmp:mKIC W!VLU>Ga`)|ixtRJ Y!{)Qnjy$%>^UAg&gམVBJ Y8j4lPjLԶn:z&tC@ZXVY*WFh VVДJBu4 ?:i$*ۜI"Bpe 50bRL3ksh"섍1 9aUc:s/TvUen"Fq&_s2\H+jaf8 ʾPb3|jwǧDg `n-@LW %UyabIk֫Hh[ޙ=B-]1UZupd~nx{``jeV3-]F:,spuОgM9*W`h4贗N2a,c,Ţf~_NJZ6ĖZcytd%ӣvHΘ\k!P:d,I:|.7O1h|]b.%oYӘ#:fzgn33nއ4s?͒͋DmN3N+7h՛]kZӒ44M?N B!5$~Rƾ4Vwv 4UP4$MkPzESv[u;u3hn' wiCP1 A} 6WSv8FpϊJӸLG%fm7"ŸZV4ߟJEyE&rV M=qhCX܀<w%sKZz2Mq{V=Ӵo a=0VW7P4Ӫnkbɖ:o_O `c6BмI ~E׆ھcTFen;lमw;oNg+Zp! 
;x|\oxvɢ-%/$C$jci(\zL2f^Ja9dRw@FoE!Uc#]܌ԏ& N_)1HCf-/Ç(K(r$v-qկU5]u"z  'dN@/@&'W;_g-o, t'x%.½ǪPwJK Na3Y̳i])﫣_Z9{NzQ﫣)[m.(;$P73?z7xr1m춟N q|ݤ#;wr97xh%d <}ٷYDXO;zt:=fgAVFDAm:+z;vu>Ű|3ˉnb nuZ^>~*մk770}OFf @?TSOddsWaω޺ Wcu=^".:?GK",3O?Gp]JxD֒hD(YJ ӗMaNsJ\Tx4wXQ*Dgג[T\bvb"&KH]ڡ{inG,ƱڹwS Æ}${ND RX(qUTHfx&",I MlN6BzFp,y>_W !E|~[1/D<};ܲi3tO?V`[x<)֑BS;L`%.E6zwaݾ2, ?/9#3!8Q)'x+o1e9 ' tJjn:G[8ng+q,tJ`U Jn)`Tؙ8#cwJgX g7BScI֌W>l݀6׍?M;$6\ZE" ZM""Z4ah׆O=ҩp0dcϢĝceRy NBPж#$&D#vg=pb=#kN72F^\B4Pޑ,wDE9#YZ=HR@;1Ơ2y@pv0pWYZpԺWfé7ǔ-_'/ WWpVIk^#%#˩#vUX҃,"WYZp4WLƽ~w,j DžsAA8Ɓ:Z9*b)[v꘸%Or\2Lbֳ=Zif : SkNF^7ɾ~}M5E.>wDMYM𖠌мM3Mh|0/Pc8a~I\2N_3 >k!$HPGW~}ww̍G:@rB-\\|>7Ur0 Ÿ.@aU94t)wfs1&(As8IP{0bIfʔI~Iu,f; $Rs] ؐҦDrȒq<@XrYZݒR-Wh1D+Á,:v,-)K)T1Fo[6M~fueGM^(uxu`s.3י*Pe yjϥܣD,ߎG~?N16:{ayCvx1|Ahnrui'ڱ'2FܯԢ%Ъ" qT QŤA`) ‚7r.E \&9m+[)ǣ9>l&7{Z6sYFS{JVBS{ }O)=BS{8<2isF- D)N)=BS{DUR؃ `^gqXeiWYJ]bWǂs&ea+lcm6V Xa+lcm m6V Xa+lcmva+lcmD,lcmm6V Xa;ڠO|ӕ_up͑D F;Ԑg[ǀ:\s:9nI9n5bOtq+YlL:g(Sn؛+F7Wum̺})y7BqlVWkcu{2XmszZz}8Je[?mXƒsծ;5~Y5/\FOGy5Ő[sV/3<] RaҜw^t}75c_.fspnQ&{kls>'|彈V_il nE͂5ejå!#_m?K܎J {;c3ݚZʴ6Ť MuZkHo@7$FR DͽT$۟ ̈́RZJ ƃjǹLǙ  ]L MuM$=_㑂#?%KZ4o~U7?%ONbZnࡳr;DE|dD%䜛@1&&%`0^1n&>tabl|iOٱFv<-&.gJFɃLWIi 6 ъR;"bWt\@>t/{~l.8Kif]ڬ,!BY:<k0Ѹ Lbq`F5X,E&+M6Z0$E[y獐1r9:_ipDSdƖF&P#cs%:w Nsi ;8b5Re'#JeC+7ÌjJ}dOJ1hd aPN9>Wsmx8M={6_K- m)OH' +ٔ7\ٌ,8hz(c9nՕmA-N۫oZ:==_^,TvZs! ͱߨ69nTD-um9q=Fg7jԔHniVϮ5uQ2}}z2}fY1GAͲ{_Lǖ]rTɸbpq U-X[\ [ je3ˋnp(S|8_iwUFַ:VW)]jqGr߆vj 'Véw_ޭT5J\M+ +߻|g׷g{wF>{o޽}'_pT+D,/; i7?m4Ms 4 ^|v +ڽ}PiX=8r ׃m9z~}_ܜh•z0 rT~VE+>p3 2Yj}P m0׏LoEnk|6*&%!49=RDT5kSЎA* ;Ά9G y9-d)^ǐ$X hPr-фTI$R qzJ[tNվu& .""b훝C:u:_عG*v^ڎ''qԶsNP‚7fZ,q/Nz0xS=)rCB໲gO2+Gr&jETI晑* U&'UFA@8frLscc I1 Lg4*ٌxZĜSrEN:{YW<}B {ˌ_+u+jKkn78|a?%||U{?B!U0hN N~Voq 5'k:֚[~LhfŢ څhp`2Qf$y:鉶62lM =v8vJ'׉_zrRiU#‡~_m[$ KJDDIDtDeN";OE"%>*LA%\D[`oHN;kd4Jc,hZV[\.![)f1%Tܚ8OkJ 8IM.vk'_]:uTVu~f,dYRJ'CxL9dS^CKWvDM5e]֒y7 =,KH* yo8V`*PO}5*$7!PoD+]"I -NOWi -\80(bCQk"tD,hac`m"ٖ8?nzH;7̡Rz좀E- `:Z"ϵcM0b-XKVtrDIRym>"nA-Rz$q#!y;=`!FZ#Bi|NX"ǜf:O@4*!5,@u˕R>[5[1JyB 13EGaE<OEZzm#:D\,hq. 
0֡]bڛ(hkۚEo ȵz#CT ~/3 įYIjΪ~8rg_Ey0I`47 eewWߟ~~7igçEJa ;Q/*$(Y~JGZę~g[vvQ$lαC^]6U:iܶ*UlXMӚ+pavuQ|PE`u+?U%~բP |/O?͗:h@ l>x_l,q5z;|tXgzg(г}\z*QVgKA/ k; $Rr] ؐJF)x:Oׄ5T夃3wU_d:\r"9ߩ:?JG^kM'6DB-ME ;c$h8>ipTȬӮD6MBf W嶘3-_bP-).91&>b2ÿCPJ{(AP]T>}m~_w<YUɬOVCRGf0<ɝw-]G77Ö6DtB[4.ax Sd'G C9ޤH J :2J$F (̊hXёAr.jE:ZѪ'iIA6pwsbR{rqсؑdpK? kޖ}UAZm+A(ե5'YrquպjGZM3?2JV jR yQF e"d1&V)I@Eq9ZOu42QǸָDphiJ#A2ElM (b˯ ~VǹY*0wyj.dg4߆]| dg[$0ڦ$⫚=9:kz7mZ Ϊ@LZe yJϥiHMfn}|:՟Y>1CcPR#IF#lJz2Y RQ&f%](JSwڠ}1m^vmؼWʦRoKhc9ɕJ2Rr'j>FY sS>D=,tDsy2;` A߀$q2+Äz#lv3]RK "@C 5A!6 TNΤtrT@ dZv`?!j{W':B&ﯞV\[[b}7Yۛ%«k^}_˜` qʁ'IAQn"@FJClN!q۔7_rD#Oc.S<*t=:-`Sm^_%!Wp*QS'CV'uo\?Q9#kG=%`!JjA h쀥h0K4IQ!I>y-:oM2 (}ԢAg7mLj+RԶesO o!dK|  hb2Vbф掄1gbBM.6?d͆D~%_kۧǴ|zrMrfSat?agxݦG{iD00#PW{abͻhb敋Q{f}.ߛN'ڗYI=4 4ie}3l6SIyD.zcbv7Z٤7{}8-Ipg~o <8fbi/CH\J[ˈRV{Hꍓ yف^+gO(~Fnmn,l#|Rn9v("/=e*sBp3OL \ei;\e) cZ\r~*pspWӹ$`ȅFp]%|.cKS[mxy:99ǕZjvt|@|KmN;6 N;2+O;2Kű{G aw\rMv7qtl=mls6=яA{ i)4NiblJzm4JEU^:R"HjuOsy-z/&+{}]_K%xEwIΩz\?Hwiؚ5뛹ҭ{j\ |Q7+ߏrVu:9dz1€dzԌ9~2HJP#+ QB 0xMտGMfSn5li.cG]ۚAWېoKxm$iNݭ!6Ch!~N9@ft@RQ^:ʭȮ{]$y]SE\]e]$lIk_ݢ [_8(=$Ӷ={&R8;GeC -k@]}VYh&hV1hNʚ!R=6 :ϷPAg.iN;g*Nd EOk,+Ҝ©"mR$v"[ %Q"Ф6!I,!,ΖLUه3+Apm$Y9FG'L2!9Jm ya2e<ȹ$e5Q#((Fޒ-Gp($ u t"8bgf^5EXyk8ա$\['&\IϦWa*X}|o59k6eḓƽ" &^,w)j/9#DNTʵM53ފ٠0V\ؠ}MzdRPo^P(ϽUHƱ`c)U1H*TMJkbl֌J1]Xle XB½o ]n]fd=ل[W7g_?&O5vH}$˹BD&@%DEn,h”Ѯ{v'iΞEĝceɥ *oGI L#x kbln2+.hb@u{ X9Dkz!RaY"3!QLY-!4P6)eEa}C"C )zq,"ż#(<`Jl ꨬ-Y6N",+ƃK]5"-i{xߊGCNPx 99u >E(M0&@eCsZvׁє}(SP򠀥g\p^CQRcVQ9F,gCX/JR*U/zz׋a\ &h6?$ɜ6o)%g&EP zqzPaq(6jk56rc4x o($-g~4Uwf?.D#QB $=:VT,X[Q2r+@4#NzgJ 9oJ>@](%3St؝e1i#$@%tdLDIAi^X6sP)c(8<8v ʹ MPcg,ť6=DbvμR}7s2zr9f=D]c~\?ܬqFgIrWr'L^;c !hRF r?7z7@SunBslxP}x"|"D @'iŕ1`It_ϲ뫱WSm 1& ɩVߨcy#dFδA| O:Efl/Hi*12'J+2\ !PBɸE&i{8?fo~hחkg[G9RcOx3rQ',x9@):ü^#$?5Q@F ]S%9ٺntuTPő?$*4`Ky6 N(8gW\pކ YOO Y̆˹ĢX?x5ih۫_szzxv~qziͅ$p<G~p&"j֦Kg9dzr 63> nA{yM̛w׫l 3eqnhxz-7s'/g׸;Wwds$G:uC^kY],QhCO3:a2]-&sx>ss 1G]dר]xm֌Z{\H; ϏϤ/8AV&m`Ć/p9C7'2}?_Z ̢/:X<_S`Cs(54kbh`%z>[kx[ hX8C4qaߍ猽GWpi5#f8 #Aq)~N[\4OFh!z4/^/t&&#уFUHULJBhr$8%bm 
Q4yЍX`܉HOlXpX*.%,+J%j00|&C+ jysPBHn!D^)c7;C;n:U}γHF>/f9)9@NP‚7bZ ,q'/w'=q`e(F}r]7Vzw\^910 DP+JL2όD<&T%GO BrNp:GLb 빱1$CmL*F͈O9+~Jҥ nkC*µ[&v_7=y&Ck'(ۋ>JgD|ZJsb+I˄nVΞf2eb3}YmR ,U5 a99 s@1("` K@3 ( D ]Їh3*[]%=ON7rJ> q6\mࣇ1kݟέ?C3vduWg5+@Rڳ Gɜ,w`57\Jo7Nk4?!YfS:Kn0Dœh:EDJI%)N҂QѲs<2SBLjM<(ÁDB@'DLaY-3b:+h/Os&vsռlu3t޹׬}Jt>' $%H # sHQy*B)Qd *12$ZT"9P*ɝ9bhښ"u тN1 /EJw>-,kqW dRXMUVmٕn,ɳZ[I̓bfR1@'Ӑ@jAm5|ɫo8C<3>VgrǮVO8~Q*B㸤p 33I8rG 4r,]W8K/^.ۋ*<дٌ]ns$t~M_6W;ib#4 smw}_/oCͧ_5)wM]8;h^gfq*ߗ8۟l rwN_VA;>j\M.KVWos/oGR{B. qJĠ*NX 1ZH%qiF9]}tjAsVZ)$SB.%![*"6Fu&eeW p-V@? /..и/==/3*cb_%v-r Hwr,PHPʇ6m,tt޽zDn$m4Wޭhn|6w]z3"~b..tHjPvu3i{hK6[.l; }}Oz^hy=XfnAߏy5Sٰ%vi>\pv4gß5nM搡 \j< su#K!M{f異9ˍw7dצ#"js5\b9rjgR-F''Fx1A荖K6-]c)xE.\3i]YdFrqҸH"ݐeG,h%UϤkC)Ͱ(R}069uRGɁ%b@!% RL=3(KXU)1,8c\䅄$Iof!0"P H3yV\Ai*#2c% "Xak3=bšLvH7>8UJT>[N2P%,aYrh=h6$9T[ v5 ko:ܷ kk:ԅ"T&H.p|KNԈʠIU醜) ej@Âs#fz9ػ;t9{Bz Y$ StڝQK˩@rdsȸY4 K%"lu"?|O-:IF*y!s{yqxO+E+EqSe~!a Iy-&Ղ݇[:Oxj0:XqYh_4y٭0+CٕrwtѯrdXۯ峰8mrsn̷ ߶ O-lXmgi2WtfPG!rP+OGdw +RW`z|-P+ĩB[TW; Η+&QW\ЯE]j[WJ#:uӨ+#kVWO#X#l]=ZcFճYG4)+8pxoMyo.ۓ/^Y .ӟM1 6}݇/_wFsf{#?Ham]~P"m3$N`p<7 Ă9vq<CW?∨뷧exh|; 'Vvߕ˃e\{];ƒۘ/I1`]'ԖDTI$a+#Wؾ)==䘉o;ɹG9Ȗןw&e ϕc.6UhјL,fۛ6;iŷo]^?k-;(kg)@rͥv4%3Qe=-Jޣ;ƃm<$3S[{7yK ґʧ͗CT3 $Xi&H͙0:T V#m.e&Q,de3Dgr`\9d2Fimg=fcJ`0SI+'bPZyIqR[ 0E)c3ٞO:vxbEgs!AXKvm7sdQCÅu@o-;# YpÕNQ{].e2HiQw[Exk?t4~z/6rS**d.%woMٝٝ7d;ߐD}y6Ľ٣frjt嚋t/*8:*#Xyr&,-st~;5֫NSgH&G F9Q{kL'(Kd:gL%&õY#snԹIxhf&F,}g0z׷ϊ鸆y\ޙNۢmߙzG2W\i-d fFuO#wɊFh:4e-!AM:=prАꄠ!I=;%7[-T*@3u$N$MX)AELJ(0]уE֨ NP>N7}<TE$'L ^еR1gĄ1''tb$Gh2&2a]AE` t$ *[-+^~KE!)Z:a6VjlT(9-xr sg/>:Yr*{Uȉi5^Wx%\V~ȉ.9JTE@qa Qk|1{NUՌښH+oM^P$mʤHs>t5[.2 % ] ae{%|FGqZ~O4~_?NY^tv L&I%J2a YR|/FƖVfi(ΞO?=)ڔD A.L. 
pN爙Ec+kjla8H .hjDNkw:MJ@,2`Y KE/@irϔ2%R}Al\V^TՇYi U2d$aHTm(ld8>&ɩF}e}Xug9gE1O_k}k/(YL5GGMA_Χ`;rz(fn܏y5%%vF~ .Ŷ98Ŀ^?rC!|w?Jztu|sRەZé"xsQ uՕ +f{b\W@|tz|VdUs~uQA& 1k5gϛn3+,3PUY골ۮ?ǵ2ݞ7 EvXn/rUk\n'WzR֛ YnT`B_{:$|sh{KC";`%0}su&.jn)^v$S54?aQ,|/)1盋\e=ktrsW\HSWMgQ.L./:D%jDg\㤶M4:!Dz|^+;dgѷ'#kyޚZyV~ecI&18zd*tdMuc$6r%6F/%pȅ ā EzTH@(gB K18#<צ'/F.yK򇩅2fGZkزԣMg.w :3@" J 9ιRJ]@xNp[ڊ k}v(Q2G0V&CTtpnH"bQKU$ގu=UE wj^瑛~'Kº'1< qM۲LтL̘%ALt/v-,AV]l iM8VzM~A҈ң֖2;-6pʂs],F!dXS@V`$@Y"ecRF9J^V?C+UZ{37K4_[ъx @yy?q {[i5X vfp{.i kCӆgZ82ЗM2Rg;6 cԒm%gIjH%Қ"ΰUz\HFgsU8<21է|=Ÿկ哟/'f8eqi;(ǖ.dIqW lpa:JRb|9:2 byևO0i7Ӂi.{LR%h5i' Q3t{$|~\jza̒88W27~hTP63׻7Ggo^MogWO/A F`R\װ1ښ_b-G~nNH)iXҌjմ.н}4lTn(|]p;)\kEK++6l2eopA=g]B(1]@gr7e@/گsy )؂o:]iFkHxHDp 7`d!cW"an[BZ=PNXY=QܺtRf{W8M/aN(Esfͭ0x~Ze Y؋v~Fug략:iQ*LCd$8YI2()(} 5c ]V gݕB{) Zvie1̫VOx_\mk^,`r%A'>LIBv!9ᜦi<8γt2(/VL`^%/bۭ a䐤2=xaR,l!(pZ+kCÐnz|5͞zHg~B8C^B@"y"%Zi#c/CnNq^G51z k|pf ZHBԸe,bI` !mo Իt#2\*q:La*GȂ)p&îXy Wj_=ØFT>$|&QoHEǸ<:%2;@!!)InN=G{,L?]lg%ѽ5]~>q 'Ώ'~_ fld6/bT^A=s>﹢:CYWm_ΓcU?n/_~p/?:*+hYO|=0hf/OynqZDE-L!a⧜"'! 
e-*) ELr:H˞&rHg f_<}Ў k'H]vi8#,kstK6y?7w>Y"B'BaRĎSMx buJ\Q¦ˋE%:إW꽪w|+k-z;VkV jJ,QhPZ*DN~n\{A~%%tkt oʬX 瘈XG!%$5KaR/RLjd<)Vփ@ԔP0+D3+2bn+/糏tHufw,dl{ *En~y>3Ybӎ_ok>`}1&F' f`{ #A"$ ģ(1PhmDHܲ 0<00rQ.)DqdPHS&~aZI_IFw*,r(bQ *-Rav 6>ƀ 9p^'{`euP 0]骇 BGXQEPb$ŬIXbVgyT| }7)8]*7IY?h7ҹ5HL3Em=~4gX#?LT9Kmr R;샙u"&^n=mҋݯLWT*vnkunl׺(6oAL R_$s<$%U7tq,ƂhGJos9җ]<~xmi{b<4A"o2_Ծr/Eg:@;3 Wm5=ۑ{f>oL;?h޾nT|ێ]؞j[d_vmvaU]$Szb7 ԙu杼*+}""IpRIMrǧOh.RaFT ˣA&ZP@0.Ȅ%aΕsEp:0!ye&p a'juD9Q8B;8rg ARZ` ̃{&z]<- Z6Xލ\v9=^+k)L@`h"QK`QE57V"FMsFڙ_wH9h1[0k4쭌y"qdk_hm @-` fP% S 3~0hLO=>"b%O)7#r+j6(-Ek, VsJuBR,Ӏt5dhxԝ-.沽ck9Fq{ځђ27XKє{0AA聩"TWhgS=W2M-bo8ky"Fj6 eS"i>*گ▱? S2`K.- ےA Q޳oϬSz &&VQR1q6 1NWL#,8ZI~>;\{89}?*I^1)Sc(=dRt]Un~PJSj}Mv&' }f: rl//w/&/_ȷzwa)RXb62 eqj\Mq}_DFzvȗQInTs2(PV0k9[,ݗ[;/_Z>" ol_6u6 m6x"//n60klhf^Ei^zs-Ώr=-DY\ͧU0@jnҌW@;.obFGpcq=/IMgƀUvRJ<|+oLf}򓜞-2X "% [WZN`~U(;4B(N俍hN єPνYʶʌp*E%;n̹|e&#QHYu2@30{-#SB#9[znC d>of &Nx*vJ8#B#Mi#1&(ia 9N?`.0 B"Sh̄J]=9 j8uS~yJ`ZPBܲɮ!n@Bܦum=4V_,8iof{搜{S?\>q)1PD;=aV31m%NEÔΛX(=EN)d`Gm7 ,3r#c6rG|J6,;b!EPX4\J84[.iSuLfLƏ_G6T. 
HD홌`5qϭ xcWHl?!s8-rI !{ j-$& fh/#vHHLBDN̈Φ~<ڻP[`-,@# YTH\yƹԒp=8 ܐx*X!%)8`,r9G(XXAs?F}"`l\+"̈DqF@x)pb2s:Ty &XwARZad`(֠`%#1)DXҐ'2CFb?nfDFqg^\䳇t֙KvEɀ.nJ0TT(ϱ+{X-NYtdx# ~}a68< 잌Mz4Y\?㌉+ GpY[iEvN4eNVXǩ%ƔLK]jҁ"cBO@t.;ѷ_@u@.*K`w&ww؝!!*G[lHyL1-O!1 :?!Rb&(bQPFe)p-a^jF>ڛ-hz@H}a?Q2v%l)jBϥm~m)o!mTNwsβmLMCH>%r(`)Zko2iR-n' ;_ut*ٶ,fHFǤ"9̰4N+0I>?APy 4LCr废^XaBڰo-!;ŴCE&52bRIؿkkڮWxBzmS [\@MDtOFBb Yvb4FBH*ɸ;HppxP_,ηw+ GLњiWDaQ-..G &(m |jHH$W6_>qԄGҹ{d ܥPb`cp1kNЩ靈y\N޼ݫϷ'޼;Dɻ`"@CSkh*Ќu[i)Ұ6fnD竏quwXx],qR\O}?A/lpm5UC- _C"Dđ~x{7!wwF6  lԯFZjN@GkF;QU_1dLbHuT7-!P`#=aCye-=a ΃Ah `&b`BK8jdqoPipNfPH)cכArΝI'?5j$n;`jKNGErJJ3uw&S꩗0EI%3ȖXX_p2=d,٫{-:У r9mo6ǿJcna {@6 ,n7o.fq>=y8w*3nnw13ݒGQb5(vh07dZ:+iLړS[ge@5 }[iݐ}th2Qh;BMcVr cc>ވ{o狷}pP?^csVmPRIQW>͊yY]+ѦZ.Y΢Z[2뤲Af FZ,{ G:Cu <H ]=Pι_Um-*"ֻ_~E峥kyr^6(&H&&S/ P4Ŭ7>|U6F*OlLUFNziLSi8z]'d~$OB75X#RD+;fȠ# |uBBWKS+㚟ᗁA%VNkXw^*47X+^=.#V \]w<Ξ+n:T7W4s}cն/y=ޯcJҺmm>\`GJEeusW #XaC@/ֱ1]k;TҴuk\S{= KסvMGւ":YӠ>[|4Gv0on]K6e;=>>rە~E,G,2Vr?mX<?z˯BM:bv[ܼ]܄Żݼw|˓g{~wS`TL" 5L9׫8_Dx;vG2fEi-Kmv^<{" =Ss->3u$Ǯw +.&ovՄ){X+]IW_YbMCvoP>Ϋ-_3l^ݳvrw~}/|tخnmp{i~|$wī6祏=h+}Au][  {lW ݷG?}y3Zw YёS+Wu֑O?}Ohۼ3WUsug.D~_q)'~"]th>tߴ/KT9e$i~e>S"m#EڔrRmַ~ĭ/y5FP]_(^Zߐ'Z҆6F焆@NulX8[k"h7-?]1&EyIDh"(Qmuj١9XxIM2R)#?VRm tuE6\Q7JuFMN~渏XnD0!Z[G: 6-#\G&¡١cK[7^aSRH~ڽ_nq޾K k:lQ1yᣀ:D[#m 3-`ۈu㚝6, N>ⴉ~ڹZ,53GH?4}y%{]G>/֯ )Yė{2ڭB6^F;4.z2#$4H/qp6voΗ՛V }Л%4u4$lgwաm.?7ys[ZKE Mg/?fˋݻvvMᅴ-W[,I6I>lFO2vz}sӮ&࿾^[)HX \1.\iH]zJ_jre63+֐\1\iK]ҙ"W+kT6#b`g+."Z*ubJTE&(Wz"#b`OqE#m/+ź\yUOl;\1n> Z\1v^^L8)kY9\ +v5%jw:<bWÀF׹\h)A\F-wt(wlQPyewu7oF-]")r}V~﷟lle@M!q/:5k~kG }^ >NAPUZT U-FF+ZDΨ OUWZVgl-|=.o!uZAP̈́>\_gÿjvsu<_+s RAhܟ}x6'n \%|.v"n70b7Ln d$W QTqE)*r5A" \1Ǻb\Ӻ2Le ʕJ@N +\1V\1+A)ʕ^䊀Alqu6I]38ErTV+؊lq]6+*u"J-t ʕ+\yjg\MiMvt#W~ϮgBc µbla0RjL,+_Юlqs+.ubʭ)t ɺb`+u\!+>m)ʕFK69_Wq@jy*W J]{m ۰=D1Fo̪JFTPe7]~.#uI zIj &cHwQ9]CAu Wn|Q5Aac{Q/[ VY>e^ԳۋKϾJM'jMC5bh+osXQ1I&9P3@~bڙ%(-1#Om>:'GF=9''IE>d.rŴ|)eOQ4אS\sI|q.2v-(U(WQ+\1pF l+;D\ѣHŋ|M1.\iN]"W+K\1Ul;B`rŔPjr ѷSl.rŴF.WLiU/EثQ Gwv\j 
'v5ҧB\--ruH׃FcFrEVlqr+U"ubJE&(WH?BFrEJc]1iUSUjrmͧԓz~IDI:IԐҪ铫'%Y9Ad""Z%\rŔ[E#WFYDFrc 9d O^DyE?E[i2+>EyclbWLkTrŔN\9R􊞁OqEVA+DYjr$`h$EV% 2)o=΄jcǮ%hGi Jl6 +(ruh׃&#b`v]ۋi.WL} jRReXV<6&Oe*i2[<,/4葕NG62:*Qj-Rg/99Xl<5&i0ţGc"Մ?) ϵ=WL;ȝ [UF ׈lL&ufʭuE#ʂv9OqEֹ(mbZ\FUF䊀Gl.rŴcJ\MP!#b`3ȸ"WDdrŔW$reY$S?\1؛\>ubJ]xNQR䊁]>rŸr+UtFtrE|2hۏ$WL~ S +ܳL8'-\ ]3hJ138[Ȁ \ѠHB6rŸs+(R+L-?I 5* h=3MKIZrŸ2iM S:(r5A$`3z6 \oYkor"Je|/F]/_ lƎ] u# Z9֛aXJ+YЮZ،䊁FW\iL]b >r%\1\iJ] (r5E*Ke$$]GA7 U2h2,3艵[{~^m2ӛxA=deYn0>&?Kρ3pkȟٕ%yr d6z''ы5F xrgs)R~%WJ\1*(WZ*is\1)"Wӑ+#&#"U\1)S+\$re$W lE6rŸĮ_~\1ej:\9)d#W kۍ#I8-%"/0?[ ##b(6M4wl-M.1@dwWW̌!IAsR@ԡOx?}7=${ ׊VpD5ZIOKn^9ZMV9+3h:M+4 i+CW]hJ/9;A-rju6C+yCW1 `fp=-X$t*xEW!Z]@z9tOW!%t cs;(۬.ah1wOhYږ7W^ׯ__hs;iovFjDu{ׄ2/I:S`ߵnw'lJڦ|>5Anv;O';#~뷻7=;V:KxqGDۯ88kŔku/w5_E fŞWY}r6~E_ɪo܈kGeGy"c~DL _|D|kLy`^} ~!ߢ79?q:s Hm]Wr~O/x [2z(Z$޴ &dr$gUn:[BҮd Ǒ/]o+HE|U}r|7i;_wgf`6>\åLO(uw5R =q[mrLlM &:)9Er ̍zC>2Ь1{.ԩXRU֩qorTkņUST{#SX4zg?.4n]|" TĘcގG'gjAC$ZA69RkPTA*FZ0Q5mЌ}׶WzNѦl$Ztm1Zzh ۷N!bwRwVZ+Otnf()n"7M)cĜDhY{t؍q ٹkO͹xޜǬ*ګ;9SSI{f S $]1ڪTw$ף(]opWc1ɇ%j]Z#vNR -rI!a##$63& H/*M(-sU.RHQՒ$DH,8dk DZ,WP4 I;mg͋ŖbS}HFu)[_c 37% >{(R.I 0kjA 1Ȏ ўF }m.5뎼0Q#_f| E US`,ܢ k v**6(:ݠ-;y9xC9ۦG`CQY?k(QJ[e<*J,d.VN5!1ѕ<q'Ygeеx ;udzEKxٶPU*`{$yc!sVFC*D@ %ٻV(V*RO)xqs'.U!\ "U+5֤l`WP&4$:lI: d> A\R}S:n3%Ce:|=⁳d2 V켊޶X e=ಮ ++ȸALAAX'-P C='X !a@YPќ=4vҮSwD\*:#χPɨ[s6 : sJ@T`rPfdcBQ AP{S e*3RPHqcA x!Y{yGD)J6NjA!N) r{&ETݳ.RVKiL=#Y̼Ba0ƛF_ʔAIP 6A ZS2DbQ{4<ӼGwU+czh2&M ԙywA9/w{1#.UUq1,4ߨ(Pl^!0F!N6`lb~7Ŷiu:;[W f=fgˮt4MG1*4Gy:)HCҕjɂJUe ,#(yCs ~,{_4+XqmC^Z@mD|uj З9t %pvk ) Fc*2["<ѳf %Pv%)5cEX A5D/#+P(\|PS`iyV 1blGj31j 3$2N@Gjc~,AJ35VEٕ P?AjDPqȱZ5vRR\.[mJyw rZ]X[o6_WMzy}Ǯb}\x~ 0 cz\h-f=8zJRGEWc 7?ju1=yכV[;Kh>}:9Fwm=ݮvkr|'ucUXOWE4KrErb3Z'28ʇ8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@߭k̿%= ゞ8.牋?qkP }ӱtz\\Z\w*wo0_q)hT1.]#;1.=gC DWt)t5_ ] :] Q ]=C(.’ OEW.@^] {=T!Ơ8*e.t0b0A~]c^G"- \t5:`̡@IZҕa'z=>Mǰ.i͛<ی3|wZ_./pڜ"P/>]!zS/qxu^yʯ 5uQ@I- ]/mP)h߾hLDP8V-.>A!ůggxuu:~.L]Ew:%_Y4&:zu 
$nqxeA;kkm#0C`dg11ɗƠW)ٚnRM-dSd[=X}qO{%uDԦYmicέ0;˜܅(F䙸9â1eed[Ƣc\ggBeU5]vv[v_K^Y̘C:jੴ+ڡJkjBћ4DRYC9-(J5 t ?!o߾̦2VC0 u1=QX[ "t9L#*G2ĎݑK(IȽDGj-]`Pg*坉;% f=]@0:DWWҮUB~VP K+8]Z)';z WNW HWI-EJ" ]%wZJE{ztN.Ů^݉]%RvBGf*]ˡ+)G]ZkwLhѫtЕxbӋŇw7f3j3@ۓ7Ct%zڶq:aQu;+@:vJ(qOW/!U,\MBWaytPRK+N0{ދry=.ys泑3o&ٗ7?|SJ}R;m:yJ-wD%ey1[IqfIgSہ&iw 0ΈLXzEϴq{M uH7tFnHpigܜV(U5u c8r#t\BW %)Lu\BW NW ճЕ v) "w&В"'vh\OWBW ݩ)wLpegԕWW %=]@RTg]+]Y ]%3*]&s+J{ztv`JpUgf5#*9=]u^ #C>,]mjf(Ůt%{ڶq \w@3tbJh)=vJ(aJ`!J ]%w};]%Rt\;s]ڰ03lR0Jʥ_[rZtW(mԲvB\߲ihVfIuMVH#wh^, z9 om d(bQ *-]+,he0P X#P kKZߡ$cUʧePF̢SeWf|ǏA ϤOOG#xΔ.8\ pD*R%|/O͙o &~ )Iaû7< p}#=-PrϦ6$?|r=^_G,n\`t>Iܕ9ޔ fָ ( Jz_ӑE}4['BGܦw:!f8uW ;AMsC8% MQ*E2Y@YL^jʈ8g FQ6<)c"`y g 5d_"҉罪tY>KRR Z~n25CL,A%s{9^/;Y"AoORwJ9t2:ϯu?>Ւ!No0SFq^&F~B G\Ao|pz JXt2.sP?$He)N- #xEx)eH)f7ASJ-n716=]]dTv043*xt<`d O-go·W-ɧ KVTsZԂKwlMa|X>Ri c@QYgcDMx2RqaG% c2rE؄ A2 ǨU:RFYJy>a5Q%Q51\1AMV4f?啭Z7!K8PehZ*ǭg  ZX -H ){n@U 0h !',u!|Ri^BBX1W+F%Z6lj E giv72yaQƢA *Zepi# $ Xa,OMt܏0h4i@ſRyJ%TEWRA0,r N "-X01Yz׊Kѿ \,`ghb鞯/P珰gd8T%yTQTsuny_өjD{DVac ϐgZjEZf0S+-""r_)6Q/R46SFC kPp5Czp)lojRzx^s%HcxIU uC2lz5>nz=0O0!]?P`PwMM_?&IHؿrf31Ή#^.t ϰӾ1rx_+{'!Fŝ&pht`0t'), ģ(1PƜ/?x'ZEf9L_( F2()uR'&H,nd =FBt"SDc%pDpoGK9 Q Ƒ58OYo{Xdqjқ;Tӳ g<ͪx lppiDLgsSO#IjGz#|P6I4xH22 ŨeVZ ៞yZa$׍~vo2PP8f[eL ,u1MPZ[ΑFf06 )>ʹ!-ExoYQPp4F_몴h٧6CCHٍ!a @ dUoY־̿s7mNhV7>D uP 0iI} B'XQkd0 q|Z$?o&cPAM/0*7NWegh0ˇ0۷U`$Rl52nH5 m*d -s&?z0ʟ_Eh^5u6_73eNgӚ^] MzU0K9o:1To+OvV ͚ Mʛ7*_MEKBa,\PO.^_A] ď˅gX#&ˮ[^4X%b,]bMd.W3XV=nʈiʈk]e?JOX,W"mr_k:ָl〠|P)bb,}-b+iZ,X}㚦w: )# Ư~`MK~åmX^ɮ'XP19Ⱥ=C\XVw>;$乚f%]tbupϐddq,aCU oc󲛨"cs{"Nac@#5Fj[qt6|x)eOY*lZ@҈*ay4T$P $qQE&, s.K+VoPKd-L9jElQNg Ȍ3 )rn0fIK=ų,c9,aɤÒݹݲ.HB/XS0!<[gU]ie^D 0@P]NV7.fgZ<\;$/^0xA.@{H{ڒenx28^ϖPf٠xu&xoDz!õf7_m8,rFBBe,YJLtL$4=5>"b%OkEW:mPZF0YT .R I}V|QY |^ZKr 7_>c_E(-o 1!IB0ZR,e,D%hʽGNa|pѠD`Щ{mx+g;9I*M!RO|MbWW^oṕ1o7,"³)/?J/0S?{Ƒ¿,#%9:؍{X`CGŘ"Z>~3$"EjɴM'%hz#(׼M DGeun Lfx6%+ o xG6!A ~v_gqb VX騱,CNI ,YdB*|^RUR$MX)AELJ(cBX#je6XS")@.2Kə hrI|Iw BW!&Ζ95} UA4:bB" ӂGc!LZ1NT"R$Xf218\dQeKew <"RC6EZ 
\1kYQmW'%O>r_C]Cݞćg˶2jS2ODm59k\BS+^~ϥpU%'R]sJR JŅOs /eǝO6fխɉ.㭉eI 42) dso0i\dj[jlJ5[Xmeh %l >z2xvQ7vpiw]=  FA[q$/@@K`2&T(}^$˄q6-<ȱnieB< MM4$#\2. pN爙Ec+[jla4L(XjcO5VkAAJ@1X(ML)P"%GOeEU{&n"d $aHc,j%퐀#("j4Wl[R9+qo8}E-"?XăEoœ#gFCX{tKwDوEf8A. )D Qs#T/Ho8Vw-q\>CzԁtQi:]l.l6)K֓OY`]l`i'&r$$.>]{Xmu=G0av%[q7x ( -я&b>rDZ+92d"ީ,x`LCS#9%qGY9 yv+P7s-fF#WYd)dx ]Vz +a;9`FL4KN^ Ɩ\$XyF`Sf,ӷuN?q WVONA­^_,{tJٜi[7b:9NAX99Á+08;bv-F5ah@cQ&g^v"h6JI>/fhoziIGIPКN ̥}Ŋ[w>/-{C@=E+c0{kYΨ? /HN͛h/ht?G(W8U!M>:"dCSyU]n ;:l7-!1:y{Ƿ]/}et=~%佢u=tZriCbߘ câc@9ή/bkgNs{M+_\gμ=LF{8|o 󺡞opn*ykw] 6-/N7a.ʻmÌd!|'Wڀka;mhkt[n5k?ƀKFk ා58҆K xB4=3Β:=^ދo);#$K8Mr%:pyND˝5z- >&%S&k2`X% Ld %!2'A̅ƒ@o%rzWM-S'4x2Иw/N8~?&}L>dҏf}n:H(2SHI *{)PD irVⰄ\2;vʢU*.QbZ P毨^UrN4wDUuE[<^5u}θIm !(dql#*vq G=*e+춦`D,-NcI&18Q$` ]p*Ns`Lq$1KWD#3#r!H"04qe6HPΐ ;jGWgKb{E۝W^,5tt1^x,oh) 7.Ma<]F.,/ 2 (C+5s&.g $"-6G#<:ܔJ%ٹIX KQ)(#+ D![*:k87HbQKU4 B?킢5Z@AYo(6˟cy@zj#`d>S"S13fI ӓt%Gf-xA'Ya%2d?%KouPjz}nٚ >+lG_qg) ewDcx[[  NJqa'{}6xjgGFz^ Av?M"V6xqZ|W'qKDl0L[2h:#x~̏-ZUKEF>AHW .M4oxߞYsf9y=O3vw^ƾ5"c{i~ ?jnJBGqhޚ>҆k6/O#+UMyӻqO{ek~l~l|s1;:0[!Liݼ[Ho8x1k8巜^7#Z=aaxh0kRڄQkJ|3Lݻ\do<ʋ+I_W1K_!Iϻi//J-SkBxS^@Lvd=>h_dnggq\&؛Ď"8kК1CM!gAEԷNT HB7&im8, ž=PNf9F)k%<̿& Kop>r=A"w:ULqv1ޗ27Ďen[fgCy?MPivh]?طi4/a0TgwQӍV6Yh4yYwH2y@Ew3e<y[4jBS+4Yhӄg̳#( rD T>X725|cbY@۹ \g M [wژE.7xJ: n[}jնct7SZ-I;|}v1j!.` {V;DG>e2O;S@gI*Oa,G_~Y. +䥟4}G<M."dR=FT]La*xo&я+Ѭ:H/Z4$Y0lk7Yۂg'z9/>(% [HViԛk&y9-^б%omâNz~Z̏] xP(b2.*7 G~uiJ]]Wۺ|UFG+Du<WsgGL- ǭ({Y84U#JϳYffH8힗48"(qDؽHܝ8 `DX 1AKΉ J% V!dzX=ɉyOYiS\S÷2|kԞZuKW#D(խy;[+=YyLTnFр?EЍNjP ~O#c*xb0hnrY郌 r沕YL^~O_Ryj<鿟 qۛm7edcj;g8m8 28@`,2g$"7J/rx :'H2q itk8U!])YZ׮cLJ+m$IE#`/ ],2fS}h;XH5/QI(C$V1+*#򋨈RgeIz##UV6ц0KQZJ!&%둎MҼЇÁ{՘A[ŦAC84@EU5[|ZyZiWi|NԈJ`\f$tCwn 13䘼tKp[pxDi3v32nwn9|k÷~g`1 ~!]P%L$Kk9 0Z\b,1[@fDcrhq#{nΕx>}w2(6EiIv-x2юS&VtǟƷh5ھǦ$?J} in۟̃}}~&u,?mGuF;ݸHDHK)8""aRZEEJC;,r9"bHZ`H=+mIY΅iYV%tcS`?-Uu@o"G (7i*Cv-s&#]p?tȘwWe~r{{w˯ϝ&n֭Nlj.Ȉair\ȿ#pCpT^Dzb ųC?o_En v河e #7:ک|Hym:h6fg)hPxvSz"1 6}i+*i'܄iհqsi""~^eIQ!h!" ,^! 
P:d,i*= W"(na0 0 PPDi5ŲYXI9)qBYx2Ag-!F0WǒV8!ۤH9rS;tUgacђ::%$F֏:c0E7KDG[&5a>MI4}'kp7/M!eD3:1[ 7.hǜ61"Kjfl`rGng'ib(m"9oۻp_v\MȓlTaT9|L/ aз[f/N|UA5Rr\r3tPMaa5E7y-_o̖ ~ IrVݧ0\aY &e˯.1uWkZvV& &MٓkM}Ø\M?vOdQӵ"k3gz<;Fqt癇iM@Ϸ_hKjK!_VzkۺÛG9;pZRu{4So-6ŐRj~-ٺaV+}-p<*a-zf %n!|zq^`z?ϱ^f6=L=`2ܸ}`D[ttǘoz;"M7E7 7;k~/1SnIJ}NN!:[.p C<{ Y,9EQ! rhs4\]nx&^EWQ(5tHYǤըgkkK-͠QJrȘͨ,FJ*Fn7?X #:tVHOa1K8&)2`x@qypDRz0zU^Y!_54rB뷱YsJؘ6^-]9LUąUg#Q*M~By@>~W/ :]W;- M Q4.7IxQ&#ya;>N1c<'*<^t`,*nyW˧TKsӳsԍXYD +Nh-cv 8Â/wT?./3=) sf Y`edMT&KxJ9 s!hﵬKxIu^1*r9)EB6( I q„F,3Z6ZFE͸UDAm! es4ym\h%Q y+ Ģx[<@ADGNK:B6)h,Ϥ.t\ r e4K,'irLY#%rB4u:RX'e/ɜMJ!imn4$;aEr`iF(}6`Kxf̝*0}FH`%0q:"-3)F!ӈw+Ax&w\ᭂFdDC$6 Ր 6sk)u˖[a"=@:p$zLE6:3> [‚;OhL'R&˲?2T!8] IHERr@%hK0H}iju-u^378-'Md_;z}հNq@G'BGR[Qk, ^LgXըu,46zhs%:٧9٧3sq>DNڒC㔷 \BgrڐP##F!D#sn^Q[ "셋`I"Wnә*z2]2<-giGt}mk~\OSFHX0s bz"hѼr߱ $2mH^6(o4goɩ8!w`^"E_hTAe=ˠ"E=(o wiz-z>ߵ1_A+Q5voNߐ9B/ BӷlwUylc5W񓾪訜*ϛ!7kPkPi[B輍 Ѝ |sANB}`)HWFkt;$tD:3a,ǸV92_9Y/ ݸ@~Ԉ{wGg4k}@ bvmPDPϒ-xq-e/i_*@;#wъFh@wkc;R"FϛI["TI8H}DB)Rѣ"9!;4![c T|JZeaE]JH qEf1:9M3/i =cgϘ&OG?})x>yPp ӂCr&LriUDNt2D2'5i@v!lIx-&/ ĐMר"lT_K݊&m.uamIvvK˘jPrrlrVUnh5g]~|ۉeiMFx38N^v8hXOx\Wʻ_r&P :GD)LNA8+\* @  AGTU˨ɑYxk$LB41;ѧhsoR\KϹHնmaj8JV@y o0Ni nn^?-v̊|$/&`2yM**$}^D˄q[nAqC)W t ID!E1R5D#\4|;f]8C,[bWgq\r1{`7v9Bd2$"C4gJg1z6.e6nJoVP2R%bbAT8_ 8"k[8 آIʨ <ʠ l%p3<{=8 ]@i*Cʒ[g3N "IFg*g}hh~ߣ탳&i{%GnrixUC=Cl Cg{8W`=R߇aWv` vOEryؖ_ $E#ƉGUoc{YB;&T9DCIgsXiH>F9ịUh4B3;\F@ *$;A 0Ut#ȹ8U{SWfZSz5=!Lc0o549"d%^HpAX°Mj;9C,vEsܐ{Q#ÅH/(kP4' nQ.1!:\дPb1bxùE"G'&Li-5raIKVǩ F&WcFb?6! 
[Qu_'l:,%@ŭ/&Akj40|PXϋ˾PLvgN(}Ɍ^>pfƥ]݇h}Q!$HW&* ɰ"SQQ?HMϽ|AH0u ΋; L N(7_iv\m*ࢳu g6D޿Qwxa=2jdj&ts(s"W6J+Ɖ|`Z حׁ.&^%^ ˻[i5Ӡrf5#$ w?8Fvyql4bű-P'[HSseSX+)T)k[*Ќzq|^ o18UiY-}DdYǽs\Ǡ{Ap'EPXDBH5p^N#%vf;|ST0beR=>X]f&(>ktj4ӫ:K#TS|]xǼVO:N ꚜisMa9&Tz}jqh=*6~3P){T)fLnj,)ڐN Bzl1F#1HՖ!"fq=R7+SIlBEL.c(,<< \K,hxk9k*3iCPKgZsȏ;nwt4zqǃiZnw聆Z @ :i^#8Xl$`m9`+:\);׶w0(DJNcRɜRafXTqXG ۵ X,(XC/QzYYBڰ&Tx@|zvF`fR#+qBL@J1?v?ִ#PքFR*O_g"j%zаxQ1mZFj6qޥF{L1B=ERIEMQELb`3=vKj.|*}7fqIOku40.pهΒ*evuP0F`>[ZSh=j0yfLLlj)7FKpi_8Nח@S'jzp^%(%`X" 2`)"Q J1uf4 {?]OU+a`]!س`r`rr3r%[ǝӴBoX330B_ y"ŴU?MWHÃ`RΠӊrX:wk=MV߿5/$ ]O`8K xsF<ɮitV>Lkcar2>^8^fKH̗ŅnY9Tv8_'㢚1,vCẖ4m-]5CjT6X)zO}e5УyxE0ӵժu QVGɼ*I sX>?.}z2 Y?fIl}K޿n~T&(?長3r ǯ޼N{y߿9})&ǧ޼/0R0F[V7iM3dFe\M0kPf;n DI˞twYc,N)5(ck62&5"֑a rbrX+7r@OidSKaʎ#X5W`Ϟ\}1Zs},`M>}yeRo|6u$Gs˙R!WD5+Pr۩m'w.+yb&JR:`Ͱ(\!|R&kM"CxJXlQ`#($t~EB'y*['f g>~!sM$s˺@KG 1 bJSK|Ps09hhsύ<`ż "6L ;&0xQp.Ei;ZPʠT2yI8^Ĥ E !Ie2k3k*eHCP q`8-RLpvˍA9́J()w qkea4a V0#)(W93Z|vzUDUwAqOQUώ)'eZdy`1/j,LVϝf}צ^9ji~tqu7{dqu'r5"+F-#Fš q T+6z% D~: X<qe|URV\=CqE8r>q[d,=3ߙ^aJc 7 ^U:4>}}z!J>ML0bRʒ/@/xE6: P}a)|e J󫣽ȼ;Orfu87X$>eQHv`RY< \dWMhSٚOӶf ֘3~VI29=D4__d|tua'&;D"PR޼ZsEnz:!S Zv7$*kfOIaGdU"=5'Q+ծDz#ќ∣z7%=qhrURF(?%e0W@.@" &*)osW)d>~Q4Sl`,fA^>;-j> \._}SU,g\ jrQZan}hnu\8Βm;+1BEaIT^/a^^vd6^{BCzK<^d N)bLn(lkr|`nF mWGx!XWőxl.{~0RE$\냛s]q޸{_ ̙Wfl#߭O,  EY IEsQ}mh(SUUZ:- ;SV,he1T@̑H6 _х'€jgas_{W%οnUD,?;T.ǘ1j[Nj>t,u u{euKn$RH6v|m:?SO}1R7%E=@ϝ{:Ja؄Zi)c)M۸(8 )/Df%XEG(냉KM-ah0pHn83Nc䬙]`(!e)}+v2|N*zP9$گGJRzQSankd'ѰQ& f_RBP%p.IGQlfS* 9Dt !ZEf967r#+U݇$Wv|[[xiHR$MRZ.4fՐFV4fЍL$q 4b92iq12hb5 TUQE"L/! FIrrûOR@ %' ʋ0L}xwΟ f|گ9H=[m;nwKW;5ʼSqnx"AhʭNӊ8@x-k#rr%H@k(Z90sD:}SЭ=O.箨 #ɉd0ڜ亊OIakDjD7/ Ig7'u9^]0~PGiNl.ߡ)Y[zz1Fh2gl Zt 5d̯\;_.7jh.u#PL>MI<0O&ݵm6bu"'HIWj ~c*sF?ur[{# g NsGڶ|~(#eļ5IAJ8;(vr ? w:ev>1G2n\ZlnwyVuo1_\Oog%>9U{$39JQً 94b:%u.oIQ,&UKEz0xv<3F</F[࿋w㔞}\GfpZכL㇆X7?tNv3^|&Imqey/@,wt'n^!m}Lm}B{m^dm{j4[=kl$ K5<E|4XݥΫ-NV;\ci6 ڈe`_X6¸쐨8n;6.t#]Ob|ۂêmkyuԶ? 
Ô1e3M̲"GP'nǍuϞ;e,D?6@>Ɨѐ%?KsS*'nɑ1"{]H& TJ`/],1}y# dx}=[x[øn3AŘ& $\p BRh^1QͣP+b ZfBW,b[lnlElɁ3C׵7g/]VX+^Qʅ>%H ֽ$rB 5: lۡF2V𲴂 /K)j O3n%+sP3Rc`9 .'l~z_gS˜oysxb> cnPz['m(PbGkB98 LD˔1Y 5u(*z%ӖOϠǍOgue".qᆰ.Loՠyׅq_dXJWM$3J$nW@S#$:>Jn;ͩ'MDBz3 K"5uRRhsۢDaRk(c$r g<T&ƅ#)ZEMb=[D{FF$/K2Fh/g7:owӷvJϗ>Z3#9 JEQ)6RL> "&J[o!ZLHeT*^TA72!HPG6֡w@R$JQa%&5Q8U WP^)  Q9BE 'N,Zu+¼U[BufP6()М_uOn/\i9Ii4"T2V<ŀ'V pLY+rBg}_11921N #묢'@; gvg+̭_J)>qxH4>T*' 1jOR0ZGĀU籲Z*Ü˄CȃKS<{gK-B;汼@^8PGW"|4X ԡKA omTDYD+=a7{]b맖l|70̲W0]^3@ FƇ\97^dٟey4NFVwN,~bij5/D=nmq\JHJJ3k ~pD'mp*F$D3ʃqWB #s&KԆy17zцGVd^$,2q$$TɥDE /Y2eIOq^YÓOӺDw[F>|}dL/wZgA"u*i::2ZŨkcr\-j*ʰ[-K~ >ס$T&:S)\s3:Qj:Ժu_;(YCD(P+Ԧ q%fGԈE2LrXQ+mrFjL0("jD=2:pBFQN8e%ey* sHͳ. ŰAx 6/Ǐ y~ps$3aĞm`M3zANz)O~Z.~ٗI ƽ2DJU72 vn0yc#/eFe :`uJ'F(V3!Cy_B ߚ8?v3+>4n>*?u)R g% 1T̢7IPN!9J'EEo_HMl>jc? bOy%R+3" ^…\xYJi. 0*18l@L|T5O2 eL t*R%VIE{ f| [Hbw@2D'RȣR>gtJ%*ɭ58$0s}ݙ\kwNW}m,Â1jz_Ƕ+y\Y+NE{\lԅΚ)~ӫUhr6V1i% DKFhwqѷ1.7v,w#wVrF<g\r Ȥ";%#!m82Hc.b zV$罥Behu4K(>x=+SL9cv>UjiƟ~gt)FX)SG], 8Dx$PVQAmSlϰ$i-cDM^kbB3@ٯp~ ,Ezjf]_(c͓ckٞw!h]䪎˽e:gN[w/jdQYlp22᭐`VӠ.0VQ<o^P⨐ s,:&%4DEwJҖ8-c9RGBN B;“{V7w$40~&2S~MM&?-vL}$ǹrR D%N5(s,´5.JcRQ OCv =,MtP+5Q'툱 ª Ma]LϦq a7Xvs DdHeφ@*tϤV3b޹8R\6IW9I7o%"C @bKT#pD8I@sa1qک_f8"qf<.WND\EJy, !N[0Rn@٘G.zwӲ(-7)8E(Z팏> LI3TV8{)q8gq=e*Wbfɱvl]5)#8DI8\cRiWi.8#a;$``Oab͎C~=< ?Ar\uFm>3K7d?RBɻv c{RLm˫%+h ep]a*ünR~H: 29-}}E$Dzld͍/Ʒ~ԧ6'rY]5m6X>'("I4*srPMt%U@;*E*&ZT2yU`)UfX \'D &Z3&{F#@_rAJwg$|wx+ε(^5cY[vȍ~8AMRk6wzV@n.u1ilkAZOa9lKvjBZ~`wiy$S귛M9o[-:Z"d=\YU^@zuӇޛJ mg}ZŵôgIFjҶ,β'n4Þ\2'{}j"OFZcT f2$yccmq N+v /`9@l!N p>JNaPFPqSoB3jwB*jC{q0[zr'2Ma`Αsr[K!bCI w1J>(L kdmFJ̤ޤHRg@̋m*6SM9R}՚ h:Kn3JiѼHMZA\5J쫧Za ꚜisM9&TCjq=*v)=m}{&wé5KmH N Bzl1F#1H 1-C,`S,uW? 
=MIX8evek  a|ab9G$"^`LtAkxXb0uV&.7ٖr8tf]njED,HQ:(*XRzZzt t 3IB5N"Q /:<);wwiJwR"%1dNF03,*͸Jb,"LEAyZQXz, 2CUN}{-.6BH`ׄJOqR@,Njd'3(XѢZliBC Rw#UOg"j%J$B4j$4 O0*MKH .λh HPOTqwT@& @A10™;%OBڋ4cN|bh\Y0WfvTzcg1ځ쫢֔tQ'%/N,wٕ{M8C3LpVt{~4XCߗ΋EJ*; dA#L9K)(ԝ79 =O VCpf饅 8gn]wQZF BX|w'gf`(S1-m{S6BggͰ(g0iE@,]Zu&NDrajΉrt19M[gUdZ_KO՛/'El09Zܘ?^^UkK|m#Y~2.o@^7RUmHf0d0ʵ,"; Z?%WX/t1{ofzmQAfɼjIs@o#!,>٠~oTNRuW3^:_:ˋۋ/~w8Ż_‰XqppUy/~ ":~)kjh*Ќ49o1MNy͸ٸ2u#$ J?dO.}7e~jFMWb}*<-27;Pi8n|B. Y//plF8Y&&cVH XK͉h-_r1 +^Y )F%jylu66,//f `0Fqp*(Iq+&8EZmF*}/GK{NƓ$Pm 'vLT TڽR7;GʃfW;N<+ĪmYv4vM M/aN(Esfͭx!td=ik/M;2^;{e(Q ! 2AYI2()H(@נpKծH 1Y8M(rnA:2,"3lX+\$J rD4])prc*cUS8n!츀zq#)\ =wRY}3>q9X4HyQ?ISpO;n+r'oN*)M",E}n{,;T}2 ~RJ6W2(@FlpK:+ϵ%pf-J:b*toҁ6e)@xBMm+Mwȳevw3i*_:l^gգ7":?,MQPoT6*W7ɨ_ԙo*LiRU)R;y JDU: n y*z)l\SSɆK0^xNs9S*"3x4jfJũ<<"G8BQ]^Pʤ'Xe)$UuZk^F 4%H!Nz,] KA5HB&0/ ‡4iơa9$L7ڤdJm7B;XNxA#AMXZ x1{*P(fp.!l PjWiJ%Zi#cAREAZJ9"Jq^GS^ 7c0""LJ-cMcipKBB뷪8. åQU4.9Zb)"JFC5 ci 8>%;ձvØF *|&Q@7$_ޢcaN p !)INZ;"* M~[yf.A9́L"KueaYv^&, &uD9*gFd&zGE.nsKW-wSC̄";&DYJD}a}4Ho+Ҽ*ߨn0LlJa^iGjd~rVk/H1b1Vٵ2c,)rUn{wsjMݙ`%!KK`쏫/kå#8|(NlkX ·wu[_/hhBݓ+qfӊy 暹ËNQY{Qsn_ [o /Սö>EXQɟQ3qgvE~|!&I]%劬ኬ[W],/]%p~)*A[e:fvPN3ꕱ+B8/Ia/]%pGήƆURU+N?w{Ur۳9'kP҈K8L~`M}2+@S&6ǣa/5/B B=2%n(;Wc\Z\>bb_ߟG"9e>" 3,fɏ#FhEV\wCmz_S2%ϔ7[]Tl!=vv2)np09sH[G‰uX^dvBΓ&k/9v®3|%gS4aq(,`^\|Ȗ/5n>{/K,}n-2!~~l4WM=lbpb 1ɱ>THͿyޔ|/HK+b4V/Eh5Ǯ%(n5W11R/]%x1 %veG-z CĮBՋaW \b;*meW] *o| v쥰-'ήuzJcUHA3u$φfT$%')ͼDYrn\.#tHb9ZLP['e:-EyUp{5Ui@Y6̜O7' -z} !&"EMtB a r|`^FwJнms'ˡ-SG'W^#S\}7 ixS1|cP*V"cȮsҴ/m0}rfMs6CzZ[rp|ߎ>n\ 3*gRbrSܷk'mXvѽg|+#"Vr,uڠa(Xͩ(zBRMe%T8 y9SDcK)b΂wDðqM m O ,;u E8V[l>f̹ WS' -'TÙ?HlB1ђ2e)c!,1DӔy #RhPyRN#>j:2mFg5/6y +G X͒*BmcE7ŊŊ{&LųmoEgV2tG)&&VQR1qV{t E NVFh&/} Rr"#f]ks+ R&{_JݸT aqM\ >45Fl3z09FwuwS)eI-b՝}In~z/= l|5;E\g^7I|a;dL/&ttOi99`RуVj.J>9Q cy \N8W#00#-y]zWG.Ek)UJrgZ'z%l)@1;T:`SBtC"Y81EV$EsR6s2/ ~'oht&Ξ5=X)vHֿ毧x:^;TQ$i/R1gĄ1'/Lb2(eLe&n^0o7GّlY]BGTĒM)nNZbNJڙ8?U]*T_?Q?[Ix2O 6ONTu1o܁-޻nKNٸ͔z;Ӝ\4 rorumGKqnN}x8[1&'TK6Qx'| ^i. 
Yky4FzH.fPN-q6'R!%)$*.:mSA$rV`1n$pd2R][٣^gX3[8Jֱ--<-HTmqUqK@bkRN}]Ťn L&&z;)kH mU@fOfbM:PB1a YZ;ycbK'a(dR)lJcQ>L܎93ul;giE1b;=ګ= 6&BE, TDb!zVh +Atj6 AЄ 9$/JjECl}46H-@3qvÆԯCda j~jyqxZ)5Hī%%<1ل`=:\:4>ycDوEɻ=$gXE2Y%r3!4Jt1J\۱EL=Y#@vV'_ggP(;b]|`4%̓)\V٥T~b"@-& (x {=#MTz*{\6{# hb_(Xjw;ǖ* /aU>YcN~ő<2 W֖JWBTYfgI T߁-;R AtMl+\GS]qq`sb2iJL fvJBIJgk F l4s%D39|Ⱥ3q Q旳/=zT OSZOi Ǻj?W֯y-8>=gvBJBGk0sEȐ.g;$9ܓ&Gҳte&Wu \߁2Ίqĺ|vIpGD8dK&N5RrO"LOjum3 AѬTdfM6 Z7kq<[U>+3ea3#N~#Ѽ?ZX7oNUIm*B'qu͜G_h8/!.n`TS?I;ϴ{E of[L 2rl[Αt6jd|Go3p[O Q[{Rz{Om݈n nnO >Q,l:^t}bx^{CZ>utiuvrq4ˣb6N ͖OeF ͉[66./x~Nd~x({goxƅ;{?V?,0T-"pΞ,?$OA{]KUߺk]+S|~[}>66dv7Kn@R~t4e\Dzwmv軽ri_5竇^س} ArS蓦iBSjR<ϓ. htLJ9 *Egg2,Α ~Gb ۷HlѣHlzq ^qo Ie)b%L K'y/bt**(HcT3uRPYꪔzn_~QÑ.rѴ^|>BSI3x%7I`īI:^KM7IJ.P};̀ 4̫s7{>O@kZ}\2O/ǟWB<|aUFN{1Up8_޳Wx/SeM^+(됔92C:g-`09,;={zyȄ\BG#*?O{w&n ފ~z|9-r[Dګկٵ+vNp+G 9´Nv,\֧_n192d+DTQ`P6jlPsc{M_ひ9hrƣ6 ioQZDt)JIhݦj%1XMpY9c.2KeN2B ^c0 Q @J 1*&0e7ۏ=tI7~^(U>_5/MtgEwc四w7ޣ[w>>+]7ϼϜ߫\5'{{O\?}Oѷ$j_=Ϛ=|"?i{!O5aɛ._qyGٙ9x+bFNaYff=·]2 &F/ {1]F ,JVhm%"V ,Ȭ4D5O·=43hW!$cA'l) 2}&愳pH 1sJy^5/cLJ3˲$H_rT:r3$%y{Šc=ұK\4*~}mA[n۝A;܁Y}ni 4l4-LK O FYY%\U |셜P1yOHH5S~/GWwmmKIUyxwSurĻ5W1E2$Iox$"H" (73=tt+Rٻ=W]Ƭ?e\ЮuX]1T)b!逰 ߜ wE%<؁X@Uvηo$ 1*7wE_}6d (|FElNZ2?ÊC`e \ |ҿmcvq4<Гo yX67镴zeRIҶpauAu9v zwo|ԷŸ#WZhI>.uMK'צ8lO_mPӆk2* T ss|N8-݆k;Jas֜ ׃ņk$# +0FQci$l1eH !:D9aDoiͦ9~mKIQ,`Pݤ*"Rf% Z XQi KAThªFyNz M68N8oC' HA@^AyҤEomJ( JEpQ'V8t۸¤,ǘ,nγyL,Oā5JRj1h4 gD,J N "-X01*t_-_rV`ax:oJ i%V͐rg:܉s -O\8O?s,O'XN{"y$0}%gS4awc7NnАnܹ߹.yopΈBp/k[Œ-r&ɭ2wTpGڢ>26s|܀' QKHGNGkR8D4.hd2 ',BJIVzŢ E9Vc8[e,oꮫ]Fi/>/隒8Rit`-`&GA-0p!vQtT1ӆ4uả(dy)1tp b`Bk<0Q.)DqdPHS&\!U3Gb-%T80اmiSNk<1Djc ##2d[XMML6r8s";⩇xg+tTjGz#a:lrh e(Kf?:橅yn0ŝE q:1s~[R!)Gy0v`&(rݬm72Jc&}1Yo[lU!)C@m n"A֤/Ai/oi!c%Q"R!D=pDaýDXPp1b4~Hy pԃ;)qt1-FT,~}glͤe& `B,z}Ҏȇi4X߃$}L,|300ᅴ yq~ > &b#Xxb[~ix=7vÇ_q~F`=d{sهϾ.SJ1V6ۜ!dr%VCh>ﯧ*El-޾~?FQPR=P .NG 岝"0K %c;u䔃NpףEkJ@2a݄d%Kx\M_f"vʌ&?+쯧)oǃ*J ([F7*J,,}iR`w sg0/kgdͰ_뮴>ׯ53Ky?yXū4/oؼmKxszSOvD9/&' WjnTt+$OM$s@/pޤL9uQ]H9R* 5c>1ch-ߔDo%8ѐEN~S%ۭqPS3x<; )O.XC?pH,N&+hW~s=5ȘVgwt 
T#CX0-f;yt%Ӓ 6 F̫[:hyn^xMӪ:[H|}E+s{N3 yw,{nrūM-6K^ (RQ%,y#JuW%Ha\T K$Ü+!t^ZEUTZ g@>1i:ke-e` M>jb wݔkDh:f54rNhe͜h i k]`Uk*%-tвURwtut{XJU{+KPkZBNW %]!] !-+!$5tň6J::C!Ѧ-i{.-t "ΑȞnv9mvָZx*3+-9jU.[MLhj:]%Rvtl WzCRrv%{ij?TCFWx]:ao~ 28rjƣ"k1<KpvT t L!'h!ͼ8g֨\AA!+EPdJHolPtϧ4oÝ3̋0/vyhSE!5!X[\3ruT:a-6OrsX:[S`$ ͮ(=.(IH_tF5Wlm/{[֋ǟGs%1wd c%n6g{+L=ҙ0^0i4O`H].VNZ!flA,.V #=|cqVB(uE:dMgE]/*Ni*r=dg^C/^mKo%ϏaF$wSk?.х:&W &uD9hcRbaL[qGhsw@_&$U^ o|`׷r6W*߫i 06T1xU$_d}KݷZn բ[MŢ~!%E{ɱ.Qs-9˝R 3<2ׅsc̱|> cj@G) +L=2E:x;sCZHo.i LCw=ΆSc)5{Xabo))lT8 ٻ:eQ^Z(\ufީg6;i5IyORxem>)|O,96{ڡ%<4g;3R)iQ 'ώ\2. cIU_yOp߮rS=;)¢K:Yg%m1\S Q/c5"JJ\A.a>BdV_aLss2̊4J ܧ6Yr h%*Kběn%vZrTqUyk 8ZNW ]!]1-+n ]%-tPRU`ۓ VZFNW ]!] -jS&wdbIh]tP.9ҕBiWU+Qk*w']⨳+~RWc ZCW .m7>]B)EGWφHš'=SEWVO3ZyP%{СSMZDWqJp%o ]%Z7%]#]`+[DW^/ZxJ(w'8F\n{5ʿL_!oɣNTȼ@w+Dqn4H}t"Eii9jcaǬt1&5Fo=^RgjzH_!>NK_ >x/(whRwF%5et~⽬*&ڐnPa;6pŘEutn BIܖVeqӕR;])4WeN#Ȗyp R[+E^])h'] ]yO])6CW+FBWñӕ|DNM ;ڕua+thCvJ&t+~+ŗ_j?tuܗЦ*/9e#N&@W<{ ѕۡ+h+th9vh̤++L/^{dBW@kѫ+E)~)ҕ;/m~R˧af37g(Ҡ (A g >Q죽GWlέsz$g-b͒ 99Něqr 7h]8v'(CN8`6DW+V hK0v.<"]ur&mvԕGZOQƹ ʻ]pa3tp9lc+E-6+x+thB&]]EJAf3fPtt(ݤSOv-@̵]3^^/ǃǗ6M/KWu62n"ЕLޡ\ Е¥R屙IWM|7hgn#mx2D†'ӋN6cϟ2?O1Fy\fqL%xɜm^-Y,/y~9hZ@iERC)f ҴDNEdfJͬ e:UE<E`vԕ 9dMN|2oi#mµR|fPQʤS]pbPn-"E))UL>%!) ].tyah?vRn.v=s+c AyAžap兯tZBf0@Wnw=V+Nq;tE. ]KP:tu t ={'ER=rqy-Mo޼ys? ~!M{Pw(C>><1,!޿W6LJ7Ec_hu^]#X ~CxMgh_ mUwڡdoGjL[G -"LQ6\F%}4\lE(v6-@BʕcnB!lmlF2i>sť&gtddI4Is(,Ae) 9魢Y!vHFk 6X"FKD",RģQ%'hѵw@W}|IKc8W[piv@I7$3RœDh1=Xz B8hR@ IjzL-x}W@x" ,I~WB4b(|މɰXF4ن06[*e`039M~B11ИUs#QYGoMC#FC)Y6_^7L$weolbQ=u*L%;KT}gтOBr>=7!ΪJ2ۖk#8G9ӑJj6S4{HT5TcpS,&Dnws=!v2ΒcO1Ǒ֑-~F_[2Kh!5_*X )%$ښ/uߺuSLEOW͋k(Śa|7!'ȃCK fn ,(2>EjϒaB,AN# B|Y*MVF'> rM r30أYԡC[BZ 4Bsxj腘/+$a0Pߴ+e$00!-#1kY*.t`6t'Lixu bGCVj;x{ W!UqָnzdF)Fﺆ9ԠmEX m {Z $lk]);Tkl#\-"#1RLl`6  -G 5G1껡L _@ez7Hᱜ6#,%USJszj {- ȸCæ|"(Ɛ@ J! 
/5 |uZ Dk]ܣGЋqiIb6$!914: z0U8!a%G<Cg]xsۯw.x >UzxԽ dLX 3 b{PT8xiLMȦ9r jm U&1:Dd2X`dUqV24ITL&ȼb>.02X uy@{:K`r@!Y#euU;Bdm:XPtmOU;g@vV*ie6A?%n%_`."On T#W`>]b="eDEKrm xCצwKB*q`$#(<܀D4;\2Xjpⶦ3@hՎ a<<DC7HX] @.m1U5H#VF/(yD $2Бx5v`m,BgUiYT ה!#ȃ 8»x8"6x,},ʰ cA8P'Y3 RNXZ e"~C7ڳή&(l gV oT* ޚདྷ"EIVN  |4|G&!zse_{ Ů,>_io.LcL`0u g9{14z6\{ TIo-M-iVnQ3y&4zex_eF.KoZUfmju  '? 9صJC6v#a&:Y0ݹ"SAS! ؚ Y E+PCzou]"#8 >"X$O: OE#'!寺CE7 VmQHR8(5#-guk[s [` <4'-1\1seg <֪Y u2m(v=gev86 TkeOk.9rz&-I^u *6lLVPxs9G#.^qV`o#0hYZ #+Bf@S =>\2'ѪӢ׸zrGCX6:a$łn-]3 i 6_5fs˥2 /Ƥ1˱XL2ԤVGR! N V3f !KkYvw]]~Fzѳ+}#7_6 cV/^ruu{p7 HcDNR3YA/[>`_9,U]0}~zz77qwnUA7i|y;۫?57Eo/n?z ~1?߯nnIooo?ܜ~M62skmWo̍~__O/z~Wg[77śжmg? qg=_=;+Ax΅WTUI1lvT.opQ,WY t@.r@hVJY 4+f%Ь@hVJY 4+f%Ь@hVJY 4+f%Ь@hVJY 4+f%Ь@hVJY 4+f%Ь@hVJY 4+~@HFlT aC0ܘ6S V+¬:J $df%Ь@hVJY 4+f%Ь@hVJY 4+f%Ь@hVJY 4+f%Ь@hVJY 4+f%Ь@hVJY 4+f%Ь@hVJӭ8R%~v*t?{Wȑd o>3{`c/F-NS$MRcS-"TL7VY/2Eg@+&.J3TQbH(((((((((((((((((((((((X DuN ,J%զJ DiMQ%ʝ@E T@E T@E T@E T@E T@E T@E T@E T@E T@E T@E T@E T@E T@E T@E T@E T@E T@E T@E T@E T@E T@E T@E tFJ>Zޛjfp"(wH)Nq0Ξ `. 0$p \ahlp PrYhz ݑ]+ F{g 6DWV49ҕUlwHޛau"BGT"q Bo߬~^.? 
)P*D"d<ȧP.7un &0] GG7}_ vY }A^\aN(~F-i,Jp+1 ^O-ҠP -kjT^ջ^{Z}Q Kc,~ QbT#]I8Җ63tpe]+D+Zo]!BWgIWJa+kDgvt(ۦ-t"tW^.Y;ߡGXWHWHfM K:CWx ]+DkD PjYs+1Rv)~Sҝ+7DWP6 ozu: 0LW Vq(UˎGЕ.tujS˕+ ]!\MBWֶaL+ә9􁢓`VI'o8rޝk*؈.n2 E.l]F\ՙ{e" Tܦvh m߫7 /.^p޻r~C<,\/_ۛ4svKWa }4Q_L%UdeMu  re.SzXwF?b[~C Sܕp&\?B{7o-C#|8_V]hO۴ėUHɶ6O~J]=ۼ}үFW))㍍u$<>XGR:{BF5§47fCtCp~&r՝ ~cR]Q-2W!FS&F$a0hΤ3%sx="Yv(+vGmtD`4Y}el߫tGywi~B=~mWoC,ٞNNhp{߽ٵnzeޭJq2Nj7<0_%`+rO氷LSxzy_YK1Cr!,++qT0WS׆4cLwߛ+\7x8Rܛ\υHQMbIPNBv$ % ze62x?$=.T(^7b "ۛc-7{[xI Go$ū n0@iRxL/Vp0ጹ,dt=@` Ղ_BEkt;)f4L7sϟ6Wث͏zբyl}JPºYvﶭw\K物Sz%/Oo2JΖ@+I_#0XIT;O8ܖ3NY4|wjcq~WsXYTDuvXpugzIpK-Oܨ-s ۄZZ!:2DEp&m1ḷysi2벧MF^Y؇n(ws؟^̺`]NK` b=p6i؛A:4>\qH׻m^[{~6?GͿa"?;{˹^Sn::hvU2V3KSRW$}qM0yuSM:9pKM7#6mfb⾒kj(կk:] JBִ=( PFKP%q{3TU.;TPŌ5>OTsl4+M+ᕯ͸`LL{hDx?LF>|rfOw.$m[{~rol[)|*yu3wH:!d64wQ Rͷ{1 ;Ye/\<{$'8Feޱ,ԵEIBJ0]`Hɉ)0k NF&25 jEÍFQ# :OpN9 $R0H,MU܁'(\j836g4ƅ.|V.\!²n24XZT?ٟ4ݕ?OP`aПξrƎY8WNjQF3xM2Jw9 a&~#hʍ267< tA{&W:(ᕈ٨3v̄MªR&A183 ㆋnw..s`KTIf(sB#emtQ>Ro荄x !#)],&K#p|eh0ck|ǩHfDZ09UDH)^̝il6He.pڬ8J|$Lʣ,#I%E5O&i*6̈9@_r=Z 6u6KNE0/‹xtdm2QR/m4zp w) %MJ LXx9x-|Xh (l=,s䣫_/xE:zF[u8茂0:f :ڜ3G?~ٸ0GB) Xhe#Jy&^0 s<錖g s4WT%D,͇92hIec:*9sEՉk,sƃQ "F&,a'9b2r&v9NW9S0cQ:)=oS 3֪,=Ђ*wZl+BzMI~ Ѹj v:NrBrrKw^ W$HC&8fZmx Л=OB?EDnFIzHЧS0">` Dm9ԍ6ނtP1OEv#Voо+I7z&d DQۢ:_X(k %nXmUp X77)R{l.uawaل Z>-XD;~eѝ]r{k:0<5ȽЦ#3DW5\n>2zl[G  u]~L]Ss8}$UK_uóWr^'~<7RhO~˧2 7<?|c$:dFHzꜳD .`[/`cHJ#6|DϥcOSe  Axg|0y$6%trckE};6<:=4pfwmH_m"Y ,23, -y%9Y~ÏHcl9<|VWd+6:W :" ;AYϨ-TdW0Kd$9MGWs:_X9;W6?4%R*.rš=t,jS%+܉}<'a9zIXo3 !|̣fV< CEdA#EF"Len#H1"&QQ$ol:dQO]'OX.:$#Ft¡c }Ҩ21&0U%2Rc4Szw1'wL zuNOz>k 5ZgLs շX.k-N}y^XƣfmΙGCy4U}At}i6m1 9Q;əmuA&ȝWg$"=2_$پS] Wν[m䢨lʂ02슲gF!]i63Pxv LT!z$ek>$1-6A$HE/r'OG j]JvGM͔g-p_sx}N_L; 1/g#AO'E oOzE _c2@  {%n;qn;kj"yP cXVa9$UǤG@% C0I4gkxbMe8{huCd 2xZ(s !AL ]Y3vL6 34:}"*gg#/+Yx~6n~=f~KM|6ؿjv}=h~SfVhY-e3o嬵/"gٟ/XB֯2뾘C: ]re: /*y6\74p6;?s+,{8)BI7,8,%tQ$iR]ޒ 9ڒ\Qm*S &/MKi2dX {C/1X-`uN)CVY|ڽ%,kqW Ǖ>l^{3_\ާSo+YU<=gy"7lEi+Ks&hjExezA%8]w7A6>RȖ&g1dr LP=p#PȀS{?p~`S:^atFe];~'lE1![9i31Y !X\f!юIknpNt6'ɞ/'|:kGrrۡ=PY2oޑ%Xs)J 
HVqJID]a"镽ߵ;Y" .$ YIoFgʅ ɱwȈ{Q|  ]=C ((~%]bV:hcּgY+ MJ"(ۓtGh9%e$/.J,%iސ2e֛%Kˡ`0Ȟc$8kU4gC.K7UDj~ ,|IĜ*tqW9'BjW)zuxN3'fQ2z+)0dI X<1fz!9]OM`܊g Z_ i9ЊV~04_^STL;̳ch"Uۆ܍$R5gc;w_Ѽ1hp6j-b:<sDZ}-{rgK;Ю"X2$1> d1;P?'+&*hҥBĸ}inK m;moGyp~tgǩ1j6.h(3~lRnw$Nn 4haa}n >t{V| B Fc#GBZN-0 Z&A_7>P@¨$4j Ҡв "DeP r[k|53 ,J+,ۅc[A~x0uk!]fu)n16\L+sOS;5Xe8yQ ne\:2$7!::M6EOSx&yygox?I[0{ɁYh{\/<ĎNzFG19rYa05ښBѧqVČީ.DhGJEDR dd)U61YR>s.$I;lV>86v'4'jFKXBA P*H&&,%cBg(ECt&Y!mYMDlCcG6j BȜJr{BXH.A„\Ur) HgRS侮," $B()J͠j+T6Sq( ?fP^X!,}@WvIk'b0贎 HKh+Y]`ÿN_]ko<=lWw6AvWy3ԅgY3BxB¨ @?ϥ._ ~ #o̔='3,u#Huzq##v &Ze p8M>cCAޙ."U16`<4WlsJ@'rm)Y!xo =V]loKɊqQ& 9520|S1,BAD_I"rUd!OEFM >&`R$6@(եcYunz+Hn6;9] ÎҘUPf?Ni]$L!2c؋ ǫW^vWGͅ^؊^;F@h%*h XH5!M(zf Zxt)?!OIA =(Ƒ _<2C-:%eΙdBɉ>v2ɍ;صb_0<Ogm>7k}*T}vKq-SWuGA68ƝhK*B}Rwx4%Ҩkh:W 2,ɀ RQh]]F&%!%,*NtP5H2Uc@Ouaʚ(]r"JHu:-bX;!}?e$OGH[;>[Vw׿JfQ5О9 S~劧XA!S(YDC/)'ڈdjrF'ԫ<_At )euijw40w"!*eKEV@T^ty*\t\h;.(GO^#:GMRWB8Fxف ٷQA(^Qc|4$d*Τ~'TvF[a -6mG*Ekno ?X *^B?{㶭&"4[lvqb?4AkƧ)}8xbz."GL{,KEI$ntsAؗ8KQX@U6BZ ]i[paq3νLOf7ŬX~ozo+hfE0?-bK7gffvQL b̵ 3RY[k7lL][x6douwd_'n'f'Xϫc޶fZg'@9 nFj$al8/o7濭KͺZt㛟6It^_1&d{dw,/ѯvًpŹ-eJl[Ulbl=o#Gz9(#~;??knLy5"_q7?d]f˿/< )0p<[:ʶs8&>*p݈NP; mQx ݷ/PRp9!;R2 [EB23 nMDB`lrhmm/,u QWJ2bzhjWD+]%JKBTWG+.t]!ag{2R S eDA&] > )'] QWVZnbB`t4"\MT& R!2  ϓA5Ԯ1/~"u?]ލH ׮;*Q۟ѕʇ t]m2oE#rαye(TNL|L>ve6`UCC\^a+QV J;|5Mq}#B#~'[*yLNle4 95Ґ#ڇ 9M>Ć`%tFMN}dEkyD)eucZD+"nrVɉ2ϤGѕ™ZK]vEʄ+4IWCԕɘB< Xǣ+µ,] ^WFkRue,  vei RuSqţ+y4"Z(UՓӲ؈yAO`|~vOJVu6ꜬESyj+B$] PWB[ܢ1ՍhL؋HNL'[pDHO@IɱcFIB^ni3X0 WD!ZC4Q4=@M銀^y"?_"Jn+ Z<"`O'2>w*Ru& WO^0._yi!tZyYWRJVN|}ECܡ9B:[e\+6R4 utz%ДXWXjDнwv.XXcU   \"Jt5D])0ڊtǣ+< Zi+Q*H46,HWll4"\MciAuEoIWѕQR0RA4B\ ](L¶`L!lL4B\'i  1H6] QW4b?B+ь Z>(Uѕ<1f?vM~Ơʏ[ZzJ&]\+yD"`ZpR$] QW*@~*8.1V1sUbg#+HKE4W.{+rvlD-, *bi[4Dyj E9Xq*~sE 8VZɜ<:dK#BXX2!4JT&"ғ` `!3?`ǣx4JRn!JOW ])s i9T)3IWԕTLB`≙"\ncц+"j2Vj銀FWQH+]J*+ԮDN+,jr9S튀M<1Ƣy2H2 Q*tdt'f=Vld݋5 x zd:"]0htEƢ+5"t]eh.&]=hՃYNhҥw( 4 qe уga+;R xmt}1u4U*zz)9K͜!1lDB`L4"\cZ|3(Kh#> 3($] PW)kMDB`<"\3/?"JCԕaN~z;¯]IWԕEuu_"\Mh ]WH)R ufFD+=D\j'ZBRJnԉYF; 
][ճp^[؋[Zʏ2ڕЕJ:7,^>=pEWH mmᤫѕNE+ LxtMhMBJt5]$i/Pe~բFv,Myd:65 >Yba_ z5 W KLn+*Y[N(?P:Ƕ+Kt}9*yUr\'Z\e^u7 Lܲ|ei ﷾쒶D Rlc?98VXk7V.cn\]|b}~m.JYŝmR(rj}+U8(Sӊe@UTu"||>,8(J ~sy|j =#?CL.1ai~@-X5yݼz^-0b9bmuQǥk U!Pm.Eu@KV6WZ\WYk7[$򡶥en›hŚ7rLf|Ofu -ڂW9Ac 7VBIQwSIT`JWN0rViq]g֥q,Jඑvx yQIɊIͺѩEY֢h%nXsh uQ{1N6ֹ$&Rq}^ZӴpH[B-+)0 Gi >iMM%TXfK]ԘK+ .03-m͋,,*)>JJ$j~yy5*ձԶJUu)6b3hU\kR1Táhkk^ eKVx1c6 똭s#J޵qd׿2 dMA!ʇ 6GK9gH-7Emis3=n{nU^3"Z䔄;zlQOՉ&:f%Uz݁6l;6J#TEt̹c031M~Bȥa1N{'l~%B!vEtBw(yO~k6YRXYnj(`ɘu!gk'Ss.>7'!HUEzr7NTRZ  $Y1ڢDw$ףI Yπ^ܷJ0'(mGZdjڤТ-)$qu$~JaE)U>9 X )$Z4XLզlR\=D@IOt4eּXt):Q} ]&|J'{XMI1!Ku; {9kJ!B QȎўD }m.5ˎ0H%G-/VQO ָhs5>#;dZNtjV2]`!00 ƛAy@xCv[?o:@GVZLAG=$]I4T*#d`&SQd4XQ|YΈ ʃVs I"9d^U0P>xmBV5 e8 xѼ<@= KH`Ge YLZ[]x $nGfp6᥇*YȩՏDC}^jygE3ʃ d-Š.k Hb"ҧa.FZ_\ޚwy4S U%VXB(;KSv R{ȋY /ѡB-10 zv5Ԙ jYf}Jh v% Aڱ", "+P(vm!Y5MH#{Nw=/ (DhQ:fƮ4XdЙ$ fj2RA ٕ P?AjDPqkUPaVB]$1#dlDH!(rĪMPk>Zl?XI+,I£j4ZIP*HxnQڀJMKK[oU4^"`!-AmU($`F}+M-)K `6lmRjh\1ڹG<6/]χ,|.q|}8Qe fpfU:9 6IҢGJ${hPp$JHu6tk*ZSRP'ՓFCoׄ bL]$fz4N*3bO*Vh &% xKd]ж+9]PnDVCqnQD{=D+YLw]zʠUbF =>A( =`-yÖ Vk XW6b]MA57Y`g] +U@ &eڡPʢ#6:XLNg=us[sOҕȪTqQc@sjo6i]0s0Jv -jYTAlRԞKўI21KCR 9Ok=K8T赫-&xZ*;k*އY[Tڠ@4XAVT@9 Z6i =W&EBb?Д n F5>dN֞ +JOQ!,J ]\1Fn5 q!X7k̦r͸\"AC, c4&ʥMH28DiBN Zf !Kk|A9oU hD;S`]pB&^t˾?(x6 ?m  ]AG~@q˽ϗUmSPY! 
LM2zQTdE`nNUEquawFqM mOktOa-)UG+`ҽ|uBBRM$:"LR@Tb Ǩ^D-pÀ &*2N b:,]l`c4m!%G_J!0O3.X(*{Z\% *~QRPMdPVwUPPhif ~OhXbIAQWijNNA=1ȐYM8asK4hN1%rQB"M4ւ@r(h.H;'0;J1f4D# Pe@20o,8%8- 4IZ$Û( z.GG^U0n鏭Y]'͗kI;` ZzE 0V\.;B&iQFiIጒBP#)⟉_F1.<Zh"wt'"c_h ZU`b%UpK)W8( $_@3O$] J]gQfGŜ[EK~~~U-FxƏ.,lg1Z]JDv YUaR) 卪m`#S?pН8& ; դugYBߦ4~E`QNM{S*ŻfffVEz0RddXRT{J0gwH 4&A\=Z@ 98ѭdjޔ,8%N[/r:mM} ~y1@O~N+Bo, tkꩣtR=&20Lx8H1@pjʝ'*wQTQtjwR=A v^/o0TQbx't-ށI#.:ӐqeP^+Q*RDgLq-]ݼHC9 i}ֲkP5ߩy`+ fk7&r3:&^TcbD}zzۡwKkW ^u,5 GC.˲&C.Zv>˝ϕ~ƙ\%kt(0 ĂSMI'Fb0wv""h1ڍZdV'( 4}f0IwPƿWnt'Y\5<"I0Tjڴ>`#ܰ`:5k(GB Gp!Ѭ@s'sBP%yT@ھbD}xjfIf 'whܯ_@YA+`Fı̍{kP)/$:z' 6 Ѐ5èZɓFZ#rr<~ eAQJv_j%dfZ>|w]|S`x!8H6A8UD S-)#PIUQjy|e\rqɩOJn7FzoZLM&?mWf+JB{@|iK"_jI)VV+5k\z8}r&nlzr(R`5p%[3DBD7"Ӊ̡7mb=AN7} J_=\71%&"՜K8B>&ς+gUwNP\o2%1Zq4T{˨pQ#1\Р& CIRDJxp# hZ-tG}Wy)ݽHdeAxAS~ŚM׹HEq"$IY<-5e p՚fN$/PNCVݽe`AJm Wa8#EFc. Sᜮ1<-AZ #Q)Cݎ 51 )T<;g>bWpCD$_cv٘q՝%Qf%)s]ſ_vQ 1SIpe@#G׸dSm Ht"&ɞx ZpG4BEuk;wTcWjPʶQ'>6v%yObnmUI0&*|/2ڒJ@$^^p7WOtִT\}o/Ⳬ^]\WP5៽𻽸O6)[[sq4*[hXivob E|S;nr4L!NN<1%lt6QGܤ,%:52@HMfC ֚٪k| }~/x%וkBYRM$:1IQlk+zTf5q D uIJ0$g#ܬs\s_ XdJLMLj8DPu3 x㳂1ͶzT.J`)%U *bq֜;=xک\4ʐc(Jeg;(W~R {Z7 F닀1Ѵ6,yD/'K9>c2Li< | Ùp|;䩚uC8;z̅yqa$-<~HOq:-IȑhVjonCnM1#:Mv{@y*޴[cBjmHȑh#ljsWݚbPGt>c+TO5&vۆn,S:WyyzByv4bw.&ä$[t'YQi*eH. 
5^+;jr3=w2}{C^$f{ !;7}˥6.ʚ4][9+(y3\j~eaBo#=b2^NmֳWAcGwez $/1ld27/16=hg&{b<O7UX$|u9i[Dp-I;"DN^L:# RP:K(sg*"ʔr E]TlQBM j)h?0l$1VF'<$N  j5s3Po6,\*Z.Ϳ.ЧONRΤTOU瀄V@i0glq=ȕw0]0Aٮj[G7SBQ^3!S"k4 ijuh|Aj ݢ+ޓP&嵞ܡd(O)D>[n4i-kiYbL00($WוdԊHT&`(pA915CEJ].)*e늨ZVc!h[U}.ڧmlȽf|4한 (h0"F4BEr_ WhJlj]MZI;Nf CR g8WB j<WhNhS{-Z.D!Q '6bO^1RN4sG "cŻA}  \1LXD]'qB?zn>H?KDB(2,^7y$y@h7Eǜ9젃7EOc:QtJ.ѽ@kUYO.kֽ׊+&}hZp"PAHw;b s0LpR>t CA y*S&mX7uK DuRcFtft8^*-кАWI:=M&K ,Qy#"Pr%Wˠ!\E[:HnxF|R(b5PvPE ^ 5aʟYTW!` Z+ɫi**W Ub.C fd}L #?A0xǔ!C$yc3gd0iwgsF–ezPrFbx#%8gdzp)nFpbԐ9[]WlpR+g$B~³ DfāGLV c$Ļ⽁!܈Z O?/d㏫n͟:W8w~qpv{7+?}u S@E;ʗ0b@F 56hU TF(Q )/7a(Y䃑^KE~_c|SiJ䢮+Y"q T BSo?޲dX}ۧ=94䕫hN5H!FoZ7HѺb:n]NmyD[ y*S aQdOYqUKfuY8Ԕ"%RBkdF WQeӬ"5qYRR+ +VIQ{!wggm(K~~pٺ B}H2' iWnzM{MfL+n/yC>t aרϋi6d"7N̽=bgENsBo%eɹZ"-bipmG6kiAZ↍uDYIJAR%r+UYMUQ3{8fhov AW`!nT}t~;|u9m[` )PH%Jb~o/o0~tϷq]AY /'IDI <-"YD'].&DN!EǞ%Ek͒"iPMeIlx_*HEǞ%Y\@&Y\@Nf=#+J42`=/a`*G)YEFr5u[S*< e>NRo447իQ-y4`*~RIy &if=B7~>z\Wo.* GknuG^>hkޚs%mX~~3ťM@}§0=0 AP @`[Sޞϭw7e%-_KY0mB"o[x1(E٫L5kne=$ljNr߾.2C Hp;^$5z|Yak=IAա]XnKn|bތbb$7 IF4"Sv9W[C&n_rf;ț(%@6ugjԛ`Դs$8!L=C!e#s_S^ݻSA#oO|̓fw>XNA~\i3g^Ϛ|=.hFSE\{*̭11,*˫Ra-Y#*JgoGq.Sy;ϫ][H}2G!`*e.N8_xY02A[$,`HL8-?\*1 \kJUJVP5ʵ-Hlc bh3: 7g\|;A_Wfr~H_>EVݣ`}ۏ[ӈȇW?}Ȫ|k'u]޼;v Ox싳K!0zw>B?\%Zw8ʵ7? $RL>}ݟ09 Nx%۞ R@"3$$_Μ]:n ce0AJIT67418W0Ç[mwl)Jw)B,p4#ALrI웧9E_wR4Z3-NKvƻؙ_q􇈙#%qw];1q`PF [;/k;ux:}:hW۞H`EŸYH$)b<$ jf@IvzA=HpX 8lA^&?\ XDMn'ȣ+?=Z;ѥ 19uV4$pvar/home/core/zuul-output/logs/kubelet.log0000644000000000000000003420215215140053135017671 0ustar rootrootFeb 02 07:27:05 crc systemd[1]: Starting Kubernetes Kubelet... 
Feb 02 07:27:05 crc restorecon[4683]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 07:27:05 crc restorecon[4683]:
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc 
restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc 
restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 
07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc 
restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 07:27:05 crc 
restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 
crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 
07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 07:27:05 crc 
restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:05 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc 
restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc 
restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 
crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc 
restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc 
restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 07:27:06 crc restorecon[4683]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 07:27:06 crc restorecon[4683]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 02 07:27:06 crc kubenswrapper[4730]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 07:27:06 crc kubenswrapper[4730]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 02 07:27:06 crc kubenswrapper[4730]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 07:27:06 crc kubenswrapper[4730]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 02 07:27:06 crc kubenswrapper[4730]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 02 07:27:06 crc kubenswrapper[4730]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.982715    4730 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988096    4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988132    4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988143    4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988152    4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988192    4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988203    4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988213    4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988224    4730 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988232    4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988240    4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988249    4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988257    4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988266    4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988274    4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988283    4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988291    4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988300    4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988309    4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988317    4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988325    4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988334    4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988345    4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988356    4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988366    4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988374    4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988395    4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988404    4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988412    4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988421    4730 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988430    4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988441    4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988451    4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988460    4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988469    4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988477    4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988485    4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988494    4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988502    4730 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988512    4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988522    4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988530    4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988539    4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988547    4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988555    4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988564    4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988572    4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988580    4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988588    4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988596    4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988605    4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988613    4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988625    4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988636    4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988645    4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988655    4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988664    4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988673    4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988681    4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988691    4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988701    4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988712    4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988721    4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988730    4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988738    4730 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988747    4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988756    4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988779    4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988790    4730 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988799    4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988810    4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 07:27:06 crc kubenswrapper[4730]: W0202 07:27:06.988820    4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989007    4730 flags.go:64] FLAG: --address="0.0.0.0"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989032    4730 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989055    4730 flags.go:64] FLAG: --anonymous-auth="true"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989070    4730 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989093    4730 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989106    4730 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989122    4730 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989139    4730 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989150    4730 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989196    4730 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989208    4730 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989218    4730 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989228    4730 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989238    4730 flags.go:64] FLAG: --cgroup-root=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989248    4730 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989257    4730 flags.go:64] FLAG: --client-ca-file=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989267    4730 flags.go:64] FLAG: --cloud-config=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989276    4730 flags.go:64] FLAG: --cloud-provider=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989286    4730 flags.go:64] FLAG: --cluster-dns="[]"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989297    4730 flags.go:64] FLAG: --cluster-domain=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989307    4730 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989317    4730 flags.go:64] FLAG: --config-dir=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989326    4730 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989337    4730 flags.go:64] FLAG: --container-log-max-files="5"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989348    4730 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989358    4730 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989368    4730 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989378    4730 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989388    4730 flags.go:64] FLAG: --contention-profiling="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989397    4730 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989407    4730 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989417    4730 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989427    4730 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989439    4730 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989449    4730 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989459    4730 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989468    4730 flags.go:64] FLAG: --enable-load-reader="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989478    4730 flags.go:64] FLAG: --enable-server="true"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989487    4730 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989501    4730 flags.go:64] FLAG: --event-burst="100"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989511    4730 flags.go:64] FLAG: --event-qps="50"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989522    4730 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989534    4730 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989548    4730 flags.go:64] FLAG: --eviction-hard=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989571    4730 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989581    4730 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989591    4730 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989601    4730 flags.go:64] FLAG: --eviction-soft=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989610    4730 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989620    4730 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989630    4730 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989639    4730 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989649    4730 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989658    4730 flags.go:64] FLAG: --fail-swap-on="true"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989667    4730 flags.go:64] FLAG: --feature-gates=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989679    4730 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989689    4730 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989699    4730 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989709    4730 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989719    4730 flags.go:64] FLAG: --healthz-port="10248"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989729    4730 flags.go:64] FLAG: --help="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989741    4730 flags.go:64] FLAG: --hostname-override=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989751    4730 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989762    4730 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989773    4730 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989782    4730 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989792    4730 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989802    4730 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989811    4730 flags.go:64] FLAG: --image-service-endpoint=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989820    4730 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989829    4730 flags.go:64] FLAG: --kube-api-burst="100"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989839    4730 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989850    4730 flags.go:64] FLAG: --kube-api-qps="50"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989859    4730 flags.go:64] FLAG: --kube-reserved=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989868    4730 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989878    4730 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989953    4730 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989965    4730 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989974    4730 flags.go:64] FLAG: --lock-file=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989985    4730 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.989996    4730 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990005    4730 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990020    4730 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990029    4730 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990038    4730 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990048    4730 flags.go:64] FLAG: --logging-format="text"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990058    4730 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990068    4730 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990078    4730 flags.go:64] FLAG: --manifest-url=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990087    4730 flags.go:64] FLAG: --manifest-url-header=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990099    4730 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990109    4730 flags.go:64] FLAG: --max-open-files="1000000"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990120    4730 flags.go:64] FLAG: --max-pods="110"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990131    4730 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990140    4730 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990151    4730 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990188    4730 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990199    4730 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990209    4730 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990218    4730 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990240    4730 flags.go:64] FLAG: --node-status-max-images="50"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990250    4730 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990260    4730 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990270    4730 flags.go:64] FLAG: --pod-cidr=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990279    4730 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990294    4730 flags.go:64] FLAG: --pod-manifest-path=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990303    4730 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990313    4730 flags.go:64] FLAG: --pods-per-core="0"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990323    4730 flags.go:64] FLAG: --port="10250"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990333    4730 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990343    4730 flags.go:64] FLAG: --provider-id=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990352    4730 flags.go:64] FLAG: --qos-reserved=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990362    4730 flags.go:64] FLAG: --read-only-port="10255"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990372    4730 flags.go:64] FLAG: --register-node="true"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990381    4730 flags.go:64] FLAG: --register-schedulable="true"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990392    4730 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990408    4730 flags.go:64] FLAG: --registry-burst="10"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990417    4730 flags.go:64] FLAG: --registry-qps="5"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990427    4730 flags.go:64] FLAG: --reserved-cpus=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990453    4730 flags.go:64] FLAG: --reserved-memory=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990465    4730 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990474    4730 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990484    4730 flags.go:64] FLAG: --rotate-certificates="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990493    4730 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990503    4730 flags.go:64] FLAG: --runonce="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990513    4730 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990523    4730 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990533    4730 flags.go:64] FLAG: --seccomp-default="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990542    4730 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990552    4730 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990562    4730 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990572    4730 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990582    4730 flags.go:64] FLAG: --storage-driver-password="root"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990592    4730 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990602    4730 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990612    4730 flags.go:64] FLAG: --storage-driver-user="root"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990621    4730 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990631    4730 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990642    4730 flags.go:64] FLAG: --system-cgroups=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990674    4730 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990705    4730 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990719    4730 flags.go:64] FLAG: --tls-cert-file=""
Feb 02 07:27:06 crc kubenswrapper[4730]: I0202 07:27:06.990731    4730 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:06.990747    4730 flags.go:64] FLAG: --tls-min-version=""
Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:06.990760    4730 flags.go:64] FLAG: --tls-private-key-file=""
Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:06.990771    4730 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:06.990783    4730 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:06.990792    4730 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:06.990802    4730 flags.go:64] FLAG: --v="2"
Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:06.990827    4730 flags.go:64] FLAG: --version="false"
Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:06.990840    4730 flags.go:64] FLAG: --vmodule=""
Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:06.990851    4730 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:06.990863    4730 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991696    4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991759    4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991769    4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991777    4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991784    4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991791    4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991798    4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991804    4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991811    4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991842    4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991849    4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991855    4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991866    4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991875    4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991882    4730 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991888    4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991895    4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991930    4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991940    4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991947    4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991955    4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991962    4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.991982    4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992010    4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992016    4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992021    4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992028    4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992033    4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992038    4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992043    4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992050    4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992056 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992063 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992068 4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992095 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992102 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992108 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992114 4730 feature_gate.go:330] unrecognized feature gate: Example Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992120 4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992126 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992133 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992139 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992149 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992196 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992204 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992212 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992219 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992224 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992230 4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992235 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992240 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992245 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992273 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992278 4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992285 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992290 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992295 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992300 4730 
feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992306 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992311 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992316 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992321 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992327 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992352 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992357 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992365 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992371 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992377 4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992383 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992387 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:06.992395 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:06.992406 4730 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.005407 4730 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.005442 4730 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005569 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005581 4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005591 4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005602 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005612 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005622 4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005631 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005640 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 
07:27:07.005650 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005659 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005670 4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005683 4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005692 4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005702 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005713 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005722 4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005730 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005742 4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005752 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005774 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005792 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005803 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 
07:27:07.005814 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005826 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005837 4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005848 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005859 4730 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005868 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005876 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005885 4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005896 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005908 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005917 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005926 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005947 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005956 4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005965 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005974 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005983 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.005991 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006000 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006008 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006017 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006027 4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006036 4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006045 4730 feature_gate.go:330] unrecognized 
feature gate: MixedCPUsAllocation Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006054 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006063 4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006071 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006080 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006088 4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006096 4730 feature_gate.go:330] unrecognized feature gate: Example Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006105 4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006113 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006121 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006130 4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006138 4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006146 4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006154 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006199 4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 07:27:07 crc kubenswrapper[4730]: 
W0202 07:27:07.006208 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006217 4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006225 4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006236 4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006248 4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006258 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006268 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006276 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006285 4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006296 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006309 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.006325 4730 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006599 4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006616 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006625 4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.006634 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008465 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008490 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008502 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008514 4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008523 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 07:27:07 crc 
kubenswrapper[4730]: W0202 07:27:07.008536 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008544 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008553 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008562 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008570 4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008579 4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008587 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008595 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008604 4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008615 4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008628 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008637 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008648 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008657 4730 feature_gate.go:330] unrecognized feature gate: Example Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008666 4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008675 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008685 4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008694 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008703 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008712 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008721 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008731 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008740 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008749 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008758 4730 feature_gate.go:330] unrecognized 
feature gate: NewOLM Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008769 4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008781 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008790 4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008801 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008812 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008821 4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008830 4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008840 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008848 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008858 4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008866 4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008875 4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008886 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008895 4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008903 4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008911 4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008920 4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008928 4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008937 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008945 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008953 4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008962 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008970 4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008979 4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008987 4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.008995 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.009003 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 07:27:07 crc kubenswrapper[4730]: 
W0202 07:27:07.009012 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.009020 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.009032 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.009040 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.009048 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.009057 4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.009065 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.009074 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.009082 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.009091 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.009104 4730 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.010445 4730 server.go:940] "Client rotation is on, will bootstrap in 
background" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.016510 4730 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.016650 4730 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.018648 4730 server.go:997] "Starting client certificate rotation" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.018707 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.019719 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-06 05:30:32.924253011 +0000 UTC Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.019844 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.048485 4730 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.051763 4730 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 07:27:07.052479 4730 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.073752 4730 log.go:25] "Validated CRI v1 runtime API" Feb 
02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.114462 4730 log.go:25] "Validated CRI v1 image API" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.116969 4730 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.122477 4730 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-02-07-22-55-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.122541 4730 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.146282 4730 manager.go:217] Machine: {Timestamp:2026-02-02 07:27:07.142727917 +0000 UTC m=+0.563931345 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:ce72a09f-80ac-4a93-a998-dc866b84ece7 BootID:978b4823-3590-49d6-b396-0b6ed8f87451 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 
DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:37:b2:16 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:37:b2:16 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d0:82:d6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:83:0c:c2 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:95:bd:46 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:78:a5:7b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7a:b5:cc:88:d0:bb Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a2:20:31:00:c6:b3 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] 
SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction 
Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.146672 4730 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.146862 4730 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.149731 4730 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.150144 4730 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.150275 4730 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.150658 4730 topology_manager.go:138] "Creating topology manager with none policy" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.150686 4730 container_manager_linux.go:303] "Creating device plugin manager" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.151228 4730 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.151880 4730 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.152299 4730 state_mem.go:36] "Initialized new in-memory state store" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.152490 4730 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.156801 4730 kubelet.go:418] "Attempting to sync node with API server" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.156845 4730 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.156927 4730 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.156962 4730 kubelet.go:324] "Adding apiserver pod source" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.156983 4730 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.162860 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 07:27:07.162940 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.163337 4730 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 07:27:07.163458 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.163705 4730 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.165085 4730 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.166961 4730 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.170420 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.170499 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.170519 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.170535 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.170561 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.170579 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.170664 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.170739 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.170783 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.170814 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.170842 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.170861 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.172059 4730 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.173090 4730 server.go:1280] "Started kubelet" Feb 02 07:27:07 crc systemd[1]: Started Kubernetes Kubelet. Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.174495 4730 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.174436 4730 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.177683 4730 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.178094 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.183867 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.183947 4730 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.184219 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:14:30.661670767 +0000 UTC Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 07:27:07.184385 4730 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.184984 4730 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.185062 4730 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 
07:27:07.185500 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="200ms" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.185728 4730 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.185867 4730 factory.go:55] Registering systemd factory Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.185890 4730 factory.go:221] Registration of the systemd container factory successfully Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.186336 4730 factory.go:153] Registering CRI-O factory Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.186364 4730 factory.go:221] Registration of the crio container factory successfully Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.186315 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 07:27:07.186431 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.186481 4730 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 02 07:27:07 crc kubenswrapper[4730]: 
I0202 07:27:07.186522 4730 factory.go:103] Registering Raw factory Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.186555 4730 manager.go:1196] Started watching for new ooms in manager Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.187658 4730 server.go:460] "Adding debug handlers to kubelet server" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.187728 4730 manager.go:319] Starting recovery of all containers Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 07:27:07.187190 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18905d4a47f7c365 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 07:27:07.172987749 +0000 UTC m=+0.594191137,LastTimestamp:2026-02-02 07:27:07.172987749 +0000 UTC m=+0.594191137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207265 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207367 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 
07:27:07.207395 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207418 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207449 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207470 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207500 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207522 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207557 4730 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207580 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207604 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207635 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207654 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207787 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207858 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.207951 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208011 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208042 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208085 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208119 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208150 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" 
seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208221 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208252 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208293 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208323 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208356 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208406 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 
07:27:07.208456 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208487 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208525 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208555 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208748 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208786 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208815 4730 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208889 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208921 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208942 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208965 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.208991 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209012 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209037 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209062 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209083 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209108 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209131 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209203 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209228 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209248 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209274 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209294 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209323 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209345 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209381 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209405 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209436 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209464 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209486 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209513 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209532 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209553 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209576 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209595 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209619 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209639 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 02 
07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209661 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209686 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209704 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209728 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209748 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209767 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209794 4730 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209817 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209844 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209863 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209883 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209907 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209926 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.209949 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.210018 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.210040 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.210069 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.210109 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.210128 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.210204 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.210225 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.210255 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.210279 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.210989 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.211033 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.211056 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.211078 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.211099 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.211121 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.211143 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.211192 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" 
seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.211215 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.211238 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.213643 4730 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.213705 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.213732 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.213753 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.213775 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.213798 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.213818 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.213860 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.213897 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.213922 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.213943 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.213981 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214004 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214046 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214102 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214124 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" 
seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214146 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214203 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214224 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214245 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214268 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214290 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 02 07:27:07 crc 
kubenswrapper[4730]: I0202 07:27:07.214309 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214328 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214348 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214365 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214384 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214402 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214426 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214504 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214532 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214559 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214583 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214616 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214643 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214670 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214695 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214723 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214749 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214776 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214808 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" 
seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214838 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214866 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214893 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214944 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214972 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.214996 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 
07:27:07.215016 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215037 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215063 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215133 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215153 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215232 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215259 4730 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215290 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215316 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215345 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215373 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215401 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215432 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215462 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215488 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215514 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215535 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215609 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215638 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215663 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215687 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215709 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215737 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215762 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215788 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" 
seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215830 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215854 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215882 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215909 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215939 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.215967 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 02 07:27:07 crc 
kubenswrapper[4730]: I0202 07:27:07.215993 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216022 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216050 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216084 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216111 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216137 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216206 4730 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216234 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216262 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216287 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216315 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216340 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216366 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216394 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216422 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216449 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216479 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216507 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216531 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216562 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216589 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216614 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216639 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216673 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216701 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216726 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216753 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216780 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216809 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216838 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216868 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" 
seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216896 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216922 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216948 4730 reconstruct.go:97] "Volume reconstruction finished" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.216965 4730 reconciler.go:26] "Reconciler: start to sync state" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.220481 4730 manager.go:324] Recovery completed Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.239014 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.241685 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.241735 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.241754 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.242787 4730 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.242813 4730 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 02 07:27:07 crc kubenswrapper[4730]: 
I0202 07:27:07.242846 4730 state_mem.go:36] "Initialized new in-memory state store" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.249329 4730 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.251651 4730 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.251728 4730 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.251775 4730 kubelet.go:2335] "Starting kubelet main sync loop" Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 07:27:07.251964 4730 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.252443 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 07:27:07.252519 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.259884 4730 policy_none.go:49] "None policy: Start" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.264318 4730 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.264365 4730 state_mem.go:35] "Initializing new in-memory state store" Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 
07:27:07.284889 4730 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.329369 4730 manager.go:334] "Starting Device Plugin manager" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.329437 4730 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.329471 4730 server.go:79] "Starting device plugin registration server" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.330225 4730 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.330261 4730 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.332026 4730 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.332292 4730 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.332325 4730 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 07:27:07.344676 4730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.353011 4730 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.353121 4730 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.354569 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.354623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.354640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.354801 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.355272 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.355453 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.356094 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.356142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.356181 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.356372 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.356516 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.356747 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.357499 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.357563 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.357588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.357765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.357814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.357878 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.358183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.358334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.358470 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.358860 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:07 crc 
kubenswrapper[4730]: I0202 07:27:07.359083 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.359220 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.360640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.360823 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.360971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.360670 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.361206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.361249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.361521 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.361582 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.361764 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.363253 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.363440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.363778 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.363259 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.364010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.364036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.364506 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.364693 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.366187 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.366220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.366230 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 07:27:07.387215 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="400ms" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.419561 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.419624 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.419652 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.419741 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.419826 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.419882 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.419912 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.419947 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") 
pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.419982 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.420015 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.420045 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.420075 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.420105 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.420154 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.420271 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.430850 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.432599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.432651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.432667 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.432699 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 07:27:07.433188 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" 
node="crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.521459 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.521591 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.521694 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.521651 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.521801 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 07:27:07 crc 
kubenswrapper[4730]: I0202 07:27:07.521817 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.521804 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.521851 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.521891 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.521918 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.521969 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522001 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522017 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522038 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522014 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522065 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522076 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522146 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522223 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522229 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522289 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522227 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:07 
crc kubenswrapper[4730]: I0202 07:27:07.522370 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522415 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522422 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522455 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522458 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522503 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522563 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.522649 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.633899 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.635203 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.635240 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.635251 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.635279 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 07:27:07.635608 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.41:6443: connect: connection refused" node="crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.686347 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.704098 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.715475 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.737325 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4ae55751320d0bdc692eefdcdd2085bdabf19f032e4ccc840ad437e6e0bd6d05 WatchSource:0}: Error finding container 4ae55751320d0bdc692eefdcdd2085bdabf19f032e4ccc840ad437e6e0bd6d05: Status 404 returned error can't find the container with id 4ae55751320d0bdc692eefdcdd2085bdabf19f032e4ccc840ad437e6e0bd6d05 Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.737570 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.739018 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3632f89774d79068ee8defe09300454ebb839dd0b14aca341449afc13d24cc2f WatchSource:0}: Error finding container 3632f89774d79068ee8defe09300454ebb839dd0b14aca341449afc13d24cc2f: Status 404 returned error can't find the container with id 3632f89774d79068ee8defe09300454ebb839dd0b14aca341449afc13d24cc2f Feb 02 07:27:07 crc kubenswrapper[4730]: I0202 07:27:07.743963 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.744454 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-13deb847b49a664b8f3f26ed81df60c2754988812c38f75bfc95890503098c05 WatchSource:0}: Error finding container 13deb847b49a664b8f3f26ed81df60c2754988812c38f75bfc95890503098c05: Status 404 returned error can't find the container with id 13deb847b49a664b8f3f26ed81df60c2754988812c38f75bfc95890503098c05 Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.753279 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6e63ab3283bbaf3d9970a49ca9e9f5bb1c4d90614684ed0e161c98fbf8581ce5 WatchSource:0}: Error finding container 6e63ab3283bbaf3d9970a49ca9e9f5bb1c4d90614684ed0e161c98fbf8581ce5: Status 404 returned error can't find the container with id 6e63ab3283bbaf3d9970a49ca9e9f5bb1c4d90614684ed0e161c98fbf8581ce5 Feb 02 07:27:07 crc kubenswrapper[4730]: W0202 07:27:07.763189 4730 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-91b1c79f133cb6457f454077f705103ce95f7df334a2aaf295b127ec055dd52e WatchSource:0}: Error finding container 91b1c79f133cb6457f454077f705103ce95f7df334a2aaf295b127ec055dd52e: Status 404 returned error can't find the container with id 91b1c79f133cb6457f454077f705103ce95f7df334a2aaf295b127ec055dd52e Feb 02 07:27:07 crc kubenswrapper[4730]: E0202 07:27:07.788248 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="800ms" Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.035889 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.037413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.037478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.037496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.037532 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 07:27:08 crc kubenswrapper[4730]: E0202 07:27:08.038115 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" node="crc" Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.179862 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.184987 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 02:53:13.022622492 +0000 UTC Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.258050 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6e63ab3283bbaf3d9970a49ca9e9f5bb1c4d90614684ed0e161c98fbf8581ce5"} Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.259272 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"13deb847b49a664b8f3f26ed81df60c2754988812c38f75bfc95890503098c05"} Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.260761 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3632f89774d79068ee8defe09300454ebb839dd0b14aca341449afc13d24cc2f"} Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.262180 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4ae55751320d0bdc692eefdcdd2085bdabf19f032e4ccc840ad437e6e0bd6d05"} Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.263483 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"91b1c79f133cb6457f454077f705103ce95f7df334a2aaf295b127ec055dd52e"} Feb 
02 07:27:08 crc kubenswrapper[4730]: W0202 07:27:08.309955 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 02 07:27:08 crc kubenswrapper[4730]: E0202 07:27:08.310198 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Feb 02 07:27:08 crc kubenswrapper[4730]: W0202 07:27:08.437604 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 02 07:27:08 crc kubenswrapper[4730]: E0202 07:27:08.437898 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Feb 02 07:27:08 crc kubenswrapper[4730]: W0202 07:27:08.471106 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 02 07:27:08 crc kubenswrapper[4730]: E0202 07:27:08.471218 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Feb 02 07:27:08 crc kubenswrapper[4730]: W0202 07:27:08.478139 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Feb 02 07:27:08 crc kubenswrapper[4730]: E0202 07:27:08.478289 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Feb 02 07:27:08 crc kubenswrapper[4730]: E0202 07:27:08.589835 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="1.6s" Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.839276 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.840860 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.840903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 07:27:08.840913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:08 crc kubenswrapper[4730]: I0202 
07:27:08.840938 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 07:27:08 crc kubenswrapper[4730]: E0202 07:27:08.841257 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" node="crc"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.123523 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 02 07:27:09 crc kubenswrapper[4730]: E0202 07:27:09.127284 4730 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.179670 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.187204 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:30:39.863945927 +0000 UTC
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.272350 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52"}
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.272435 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515"}
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.272469 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129"}
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.272497 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26"}
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.272386 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.274114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.274187 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.274206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.274327 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a" exitCode=0
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.274405 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a"}
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.274536 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.275735 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.275777 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.275792 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.277051 4730 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="92d2f45cbef6e01232ac18b16f4a33e17322f16b83d15425e547cb63f4f4277f" exitCode=0
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.277126 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"92d2f45cbef6e01232ac18b16f4a33e17322f16b83d15425e547cb63f4f4277f"}
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.277518 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.277982 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.278891 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.278930 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.278947 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.279404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.279445 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.279467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.279801 4730 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a6a32e681db3e8d84fd16ec855e10007247bcad255effc0263781a825017166f" exitCode=0
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.279861 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.279913 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a6a32e681db3e8d84fd16ec855e10007247bcad255effc0263781a825017166f"}
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.281139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.281219 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.281242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.284468 4730 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e" exitCode=0
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.284528 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e"}
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.284596 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.285831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.285882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:09 crc kubenswrapper[4730]: I0202 07:27:09.285908 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:10 crc kubenswrapper[4730]: W0202 07:27:10.171133 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 02 07:27:10 crc kubenswrapper[4730]: E0202 07:27:10.171270 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.179341 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.188213 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:11:08.675836832 +0000 UTC
Feb 02 07:27:10 crc kubenswrapper[4730]: E0202 07:27:10.190876 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="3.2s"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.288680 4730 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="14de217bbf35067e9c677e08a8bdd93831456597f9a1fd2a7ca9383c06057c82" exitCode=0
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.288764 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.288759 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"14de217bbf35067e9c677e08a8bdd93831456597f9a1fd2a7ca9383c06057c82"}
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.289571 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.289616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.289645 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.290718 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a5359afffc6f8581ade91768d6db7cfc13fec7245a3d5c01fb0815948e4619b3"}
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.290791 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.291766 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.291787 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.291795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.293853 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.293898 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812"}
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.293934 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163"}
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.293952 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639"}
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.294626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.294647 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.294659 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.303405 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.303344 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f"}
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.303669 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2"}
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.303691 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6"}
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.303705 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7"}
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.304504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.304542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.304556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:10 crc kubenswrapper[4730]: W0202 07:27:10.311610 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Feb 02 07:27:10 crc kubenswrapper[4730]: E0202 07:27:10.311710 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.442231 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.444695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.444740 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.444778 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.444806 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 07:27:10 crc kubenswrapper[4730]: E0202 07:27:10.445371 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" node="crc"
Feb 02 07:27:10 crc kubenswrapper[4730]: I0202 07:27:10.994350 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.188718 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 08:34:53.090177383 +0000 UTC
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.310670 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f"}
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.310792 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.312112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.312150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.312186 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.313423 4730 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="221ef41c281b2c0b07c336e6f0cf0f905ae988bbd666d72144ead27449d4f7c4" exitCode=0
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.313551 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.313580 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.313607 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.313614 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"221ef41c281b2c0b07c336e6f0cf0f905ae988bbd666d72144ead27449d4f7c4"}
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.313657 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.314147 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.314438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.314486 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.314506 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.314912 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.314936 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.314945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.314951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.314980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.314997 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.315511 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.315612 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:11 crc kubenswrapper[4730]: I0202 07:27:11.315627 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.189629 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:24:43.1256523 +0000 UTC
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.321067 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"81eb9712204e71bf6cb95987dff48f5519ca3bc03e5fba3d9c7b3c09028e79ee"}
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.321124 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bbe3613dd21a37327de97a9dc409fb91e906e5027329b2ef9a2a18a997b563f9"}
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.321140 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"595d47883ee90602f1f73241ec8d6c3c63d8635b4a097cd1b84121316a7e8a01"}
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.321206 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.321249 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.323089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.323121 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.323136 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.367406 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.367562 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.368723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.368811 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.368827 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.414729 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 07:27:12 crc kubenswrapper[4730]: I0202 07:27:12.520421 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.097378 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.190744 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:13:33.345045606 +0000 UTC
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.330216 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"015ce6b77dc828689d6a21079d4f3b377af6e26111ab5a0d6460c5364df14bc0"}
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.330285 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fba20a237997b7d208e8676716e83196faed32cd3617534734b5b7d7a957805a"}
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.330319 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.330381 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.330389 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.331957 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.331980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.332002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.332011 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.332029 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.332086 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.332130 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.332152 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.332015 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.455514 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.646452 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.648048 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.648126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.648143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.648208 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.995056 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 07:27:13 crc kubenswrapper[4730]: I0202 07:27:13.995213 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 02 07:27:14 crc kubenswrapper[4730]: I0202 07:27:14.191101 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:35:14.95090628 +0000 UTC
Feb 02 07:27:14 crc kubenswrapper[4730]: I0202 07:27:14.282682 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 07:27:14 crc kubenswrapper[4730]: I0202 07:27:14.282838 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:14 crc kubenswrapper[4730]: I0202 07:27:14.283870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:14 crc kubenswrapper[4730]: I0202 07:27:14.283893 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:14 crc kubenswrapper[4730]: I0202 07:27:14.283904 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:14 crc kubenswrapper[4730]: I0202 07:27:14.332675 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:14 crc kubenswrapper[4730]: I0202 07:27:14.332772 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:14 crc kubenswrapper[4730]: I0202 07:27:14.333621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:14 crc kubenswrapper[4730]: I0202 07:27:14.333692 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:14 crc kubenswrapper[4730]: I0202 07:27:14.333717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:14 crc kubenswrapper[4730]: I0202 07:27:14.334157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:14 crc kubenswrapper[4730]: I0202 07:27:14.334266 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:14 crc kubenswrapper[4730]: I0202 07:27:14.334293 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:15 crc kubenswrapper[4730]: I0202 07:27:15.191472 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:36:21.156502024 +0000 UTC
Feb 02 07:27:16 crc kubenswrapper[4730]: I0202 07:27:16.192533 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:21:02.936711233 +0000 UTC
Feb 02 07:27:16 crc kubenswrapper[4730]: I0202 07:27:16.776470 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 07:27:16 crc kubenswrapper[4730]: I0202 07:27:16.776689 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:16 crc kubenswrapper[4730]: I0202 07:27:16.778397 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:16 crc kubenswrapper[4730]: I0202 07:27:16.778466 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:16 crc kubenswrapper[4730]: I0202 07:27:16.778487 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:16 crc kubenswrapper[4730]: I0202 07:27:16.784309 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 07:27:17 crc kubenswrapper[4730]: I0202 07:27:17.193375 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:50:53.303288494 +0000 UTC
Feb 02 07:27:17 crc kubenswrapper[4730]: I0202 07:27:17.339792 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:17 crc kubenswrapper[4730]: I0202 07:27:17.340796 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:17 crc kubenswrapper[4730]: I0202 07:27:17.340854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:17 crc kubenswrapper[4730]: I0202 07:27:17.340872 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:17 crc kubenswrapper[4730]: E0202 07:27:17.345421 4730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 07:27:17 crc kubenswrapper[4730]: I0202 07:27:17.606518 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 02 07:27:17 crc kubenswrapper[4730]: I0202 07:27:17.606686 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 07:27:17 crc kubenswrapper[4730]: I0202 07:27:17.607752 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:17 crc kubenswrapper[4730]: I0202 07:27:17.607817 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:17 crc kubenswrapper[4730]: I0202 07:27:17.607847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:18 crc kubenswrapper[4730]: I0202 07:27:18.193494 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:09:18.898847068 +0000 UTC
Feb 02 07:27:19 crc kubenswrapper[4730]: I0202 07:27:19.193824 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:34:02.360572897 +0000 UTC
Feb 02 07:27:20 crc kubenswrapper[4730]: I0202 07:27:20.194516 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:16:58.927074306 +0000 UTC
Feb 02 07:27:20 crc kubenswrapper[4730]: W0202 07:27:20.948769 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 02 07:27:20 crc kubenswrapper[4730]: I0202 07:27:20.948852 4730 trace.go:236] Trace[1878773579]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 07:27:10.947) (total time: 10001ms):
Feb 02 07:27:20 crc kubenswrapper[4730]: Trace[1878773579]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:27:20.948)
Feb 02 07:27:20 crc kubenswrapper[4730]: Trace[1878773579]: [10.001079378s] [10.001079378s] END
Feb 02 07:27:20 crc kubenswrapper[4730]: E0202 07:27:20.948874 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 02 07:27:21 crc kubenswrapper[4730]: W0202 07:27:21.066621 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 02 07:27:21 crc kubenswrapper[4730]: I0202 07:27:21.066740 4730 trace.go:236] Trace[979266010]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 07:27:11.064) (total time: 10001ms):
Feb 02 07:27:21 crc kubenswrapper[4730]: Trace[979266010]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:27:21.066)
Feb 02 07:27:21 crc kubenswrapper[4730]: Trace[979266010]: [10.001938131s] [10.001938131s] END
Feb 02 07:27:21 crc kubenswrapper[4730]: E0202 07:27:21.066774 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 02 07:27:21 crc kubenswrapper[4730]: I0202 07:27:21.179875 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 02 07:27:21 crc kubenswrapper[4730]: I0202 07:27:21.195262 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:40:49.711209396 +0000 UTC
Feb 02 07:27:21 crc kubenswrapper[4730]: I0202 07:27:21.665289 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 02 07:27:21 crc kubenswrapper[4730]: [+]log ok
Feb 02 07:27:21 crc kubenswrapper[4730]: [+]etcd ok
Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 02 07:27:21 crc kubenswrapper[4730]:
[+]poststarthook/openshift.io-startkubeinformers ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/generic-apiserver-start-informers ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/priority-and-fairness-filter ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-apiextensions-informers ok Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-system-namespaces-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: 
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/bootstrap-controller failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-kube-aggregator-informers ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]autoregister-completion ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/apiservice-openapi-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: livez check failed Feb 02 07:27:21 crc kubenswrapper[4730]: I0202 07:27:21.665368 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 07:27:21 crc kubenswrapper[4730]: I0202 07:27:21.670908 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]log ok Feb 02 07:27:21 crc 
kubenswrapper[4730]: [+]etcd ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/generic-apiserver-start-informers ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/priority-and-fairness-filter ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-apiextensions-informers ok Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-system-namespaces-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 02 07:27:21 crc kubenswrapper[4730]: 
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/bootstrap-controller failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/start-kube-aggregator-informers ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 02 07:27:21 crc kubenswrapper[4730]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]autoregister-completion ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/apiservice-openapi-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 02 07:27:21 crc kubenswrapper[4730]: livez check failed Feb 02 07:27:21 crc kubenswrapper[4730]: I0202 07:27:21.670990 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.196211 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 01:36:01.149178684 +0000 UTC Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.363240 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.363525 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.364826 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.364875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.364897 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.374770 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.374975 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.376345 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.376421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.376439 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.410991 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-etcd/etcd-crc" Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.527841 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]log ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]etcd ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/generic-apiserver-start-informers ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/priority-and-fairness-filter ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/start-apiextensions-informers ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/start-apiextensions-controllers ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/crd-informer-synced ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/start-system-namespaces-controller ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 02 07:27:22 crc 
kubenswrapper[4730]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 02 07:27:22 crc kubenswrapper[4730]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 02 07:27:22 crc kubenswrapper[4730]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/bootstrap-controller ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/start-kube-aggregator-informers ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/apiservice-registration-controller ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/apiservice-discovery-controller ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]autoregister-completion ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/apiservice-openapi-controller ok Feb 02 07:27:22 crc kubenswrapper[4730]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 02 07:27:22 crc kubenswrapper[4730]: livez check failed Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.527940 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 07:27:22 crc kubenswrapper[4730]: I0202 07:27:22.636999 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 02 07:27:23 crc kubenswrapper[4730]: I0202 07:27:23.197096 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:42:32.479045394 +0000 UTC Feb 02 07:27:23 crc kubenswrapper[4730]: I0202 07:27:23.356577 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:23 crc kubenswrapper[4730]: I0202 07:27:23.358301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:23 crc kubenswrapper[4730]: I0202 07:27:23.358347 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:23 crc kubenswrapper[4730]: I0202 07:27:23.358360 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:23 crc kubenswrapper[4730]: I0202 07:27:23.995282 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 07:27:23 crc kubenswrapper[4730]: I0202 07:27:23.995375 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 07:27:24 crc kubenswrapper[4730]: I0202 07:27:24.197624 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 01:36:20.143971522 +0000 UTC Feb 02 07:27:24 crc kubenswrapper[4730]: I0202 07:27:24.359312 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:24 crc kubenswrapper[4730]: I0202 07:27:24.360562 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:24 crc kubenswrapper[4730]: I0202 07:27:24.360628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:24 crc kubenswrapper[4730]: I0202 07:27:24.360651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:25 crc kubenswrapper[4730]: I0202 07:27:25.077484 4730 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 07:27:25 crc kubenswrapper[4730]: I0202 07:27:25.198040 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:00:52.324513282 +0000 UTC Feb 02 07:27:26 crc kubenswrapper[4730]: I0202 07:27:26.199112 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:25:49.657231911 +0000 UTC Feb 02 07:27:26 crc kubenswrapper[4730]: I0202 07:27:26.670759 4730 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 07:27:26 crc kubenswrapper[4730]: E0202 07:27:26.677176 4730 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 02 07:27:26 crc kubenswrapper[4730]: I0202 07:27:26.681208 4730 trace.go:236] Trace[386479280]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 07:27:16.423) (total time: 10257ms): Feb 02 07:27:26 crc kubenswrapper[4730]: Trace[386479280]: ---"Objects listed" error: 10257ms (07:27:26.681) Feb 02 07:27:26 crc kubenswrapper[4730]: Trace[386479280]: [10.25789883s] [10.25789883s] END Feb 02 07:27:26 crc kubenswrapper[4730]: I0202 07:27:26.681242 4730 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 07:27:26 crc kubenswrapper[4730]: E0202 07:27:26.682658 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 02 07:27:26 crc kubenswrapper[4730]: I0202 07:27:26.684003 4730 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 02 07:27:26 crc kubenswrapper[4730]: I0202 07:27:26.684913 4730 trace.go:236] Trace[1945986231]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 07:27:15.754) (total time: 10930ms): Feb 02 07:27:26 crc kubenswrapper[4730]: Trace[1945986231]: ---"Objects listed" error: 10930ms (07:27:26.684) Feb 02 07:27:26 crc kubenswrapper[4730]: Trace[1945986231]: [10.930523935s] [10.930523935s] END Feb 02 07:27:26 crc kubenswrapper[4730]: I0202 07:27:26.684932 4730 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 07:27:26 crc kubenswrapper[4730]: I0202 07:27:26.700647 4730 csr.go:261] certificate signing request csr-lr59f is approved, waiting to be issued Feb 02 07:27:26 crc kubenswrapper[4730]: I0202 07:27:26.715930 4730 csr.go:257] 
certificate signing request csr-lr59f is issued Feb 02 07:27:26 crc kubenswrapper[4730]: I0202 07:27:26.861871 4730 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.018637 4730 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 07:27:27 crc kubenswrapper[4730]: W0202 07:27:27.018835 4730 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 02 07:27:27 crc kubenswrapper[4730]: W0202 07:27:27.018868 4730 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 02 07:27:27 crc kubenswrapper[4730]: W0202 07:27:27.018894 4730 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.018900 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.41:49858->38.102.83.41:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18905d4a6a12202f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 07:27:07.745140783 +0000 UTC m=+1.166344141,LastTimestamp:2026-02-02 07:27:07.745140783 +0000 UTC m=+1.166344141,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.169439 4730 apiserver.go:52] "Watching apiserver" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.171941 4730 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.172286 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.172730 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.172857 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.172862 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.172959 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.173105 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.173345 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.173456 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.173775 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.173461 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.174677 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.175733 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.175737 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.176387 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.176408 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.176437 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.176462 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.176579 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.176652 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.186699 4730 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.187529 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.187583 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.187615 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.187643 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.187675 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.187707 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.187731 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.187752 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.187782 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.187811 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.188192 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.188936 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.188991 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.189054 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.189257 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.189878 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.189963 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.189994 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.190065 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.190701 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.189993 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.190772 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.190799 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 02 07:27:27 crc 
kubenswrapper[4730]: I0202 07:27:27.190827 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.190849 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.190876 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.190902 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191006 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191072 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191106 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191133 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191176 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191224 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191257 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191285 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191314 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191386 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191413 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191442 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191468 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191495 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191518 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191545 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191577 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191602 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191627 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191651 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191679 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191705 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191714 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.191732 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192135 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192186 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192352 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192456 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192528 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192547 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192569 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192585 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192606 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192628 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192648 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192765 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192783 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192799 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.193084 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.193138 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.192806 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.193025 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.193337 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.193507 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.193533 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.193607 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.193767 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.193801 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.193871 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.193936 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.194098 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.195829 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.194224 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.194350 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.194389 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.194395 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.194499 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.194821 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.195029 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.195130 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.195449 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.195561 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.195811 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.196050 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.196199 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.196224 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.196237 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.196307 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.196479 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.196492 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.196686 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.196700 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.196718 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197104 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197128 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197261 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197307 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197333 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197352 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 
07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197603 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197623 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197724 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197790 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197812 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197831 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197868 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197889 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197907 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.197944 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198057 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 
07:27:27.198225 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198300 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198339 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198373 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198445 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198483 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" 
(UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198516 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198552 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198585 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198620 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198653 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198687 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198718 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198750 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198781 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198820 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198853 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 07:27:27 crc kubenswrapper[4730]: 
I0202 07:27:27.198885 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198916 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198949 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.198984 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199022 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199055 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199088 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199121 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199153 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199230 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199264 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 07:27:27 crc 
kubenswrapper[4730]: I0202 07:27:27.199316 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199351 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199388 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199425 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199462 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199497 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199532 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199565 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199601 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199634 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199655 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199667 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199724 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199728 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199754 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199781 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199805 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199831 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199856 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199879 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199903 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199928 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199954 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199985 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.199996 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: 
"serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200017 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200022 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200009 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200117 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200155 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200262 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200270 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200315 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200384 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200406 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200433 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200458 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200480 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200504 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200526 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200555 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200582 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200606 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200629 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200654 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200678 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200705 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200727 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200750 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200774 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200798 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200822 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200844 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200867 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200890 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200912 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200935 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200957 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200979 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201000 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201022 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201043 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201068 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201090 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201114 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201144 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201189 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201213 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201235 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201258 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201284 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201308 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201331 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201354 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201377 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201399 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201424 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201446 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 07:27:27 crc 
kubenswrapper[4730]: I0202 07:27:27.201470 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201493 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201517 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201541 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201563 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.202958 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.202994 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203016 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203042 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203066 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203093 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 
07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203119 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203143 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203202 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203227 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203251 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203276 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203299 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203323 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203348 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203371 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203393 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 07:27:27 crc 
kubenswrapper[4730]: I0202 07:27:27.203419 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203446 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203469 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203492 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203518 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203565 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203593 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203620 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203645 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203673 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203701 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203731 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203759 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203788 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203812 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 
07:27:27.203836 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203864 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203908 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203937 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203985 4730 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204001 4730 
reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204017 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204031 4730 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204046 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204060 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204075 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204088 4730 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204103 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204117 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204132 4730 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204147 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204193 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204212 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204226 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204240 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204254 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204267 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204280 4730 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204293 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204307 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204321 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204334 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 
07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204348 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204361 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204374 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204387 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204400 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204413 4730 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204425 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204438 4730 reconciler_common.go:293] "Volume detached for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204454 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204468 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204481 4730 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204494 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204509 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204522 4730 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204536 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on 
node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204549 4730 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204561 4730 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204573 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204587 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204600 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204613 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204627 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204643 4730 reconciler_common.go:293] "Volume 
detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204657 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204671 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204686 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204701 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204715 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204729 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204745 4730 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204759 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204771 4730 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.221767 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.224428 4730 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.226578 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.228095 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.228117 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200674 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.200954 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.201642 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.202074 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.202364 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.202420 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203525 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203673 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.203685 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204011 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.204322 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.210496 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:03:04.097582922 +0000 UTC Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.213255 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.213502 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.214046 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.214309 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.214675 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.214836 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.214954 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.215123 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.215521 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.216776 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.216944 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.217036 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.217502 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.217753 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.218191 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.218229 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.217896 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.218553 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.218668 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.218725 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.218947 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.219368 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.219847 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.220103 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.220365 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.220387 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.220473 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.220510 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.220707 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.220877 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.220948 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.221393 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.221448 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.221750 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.221758 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.221940 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.221985 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.222718 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.223329 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.223927 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.227517 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.228271 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.247798 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.247816 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.247945 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.248042 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.228695 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.228920 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.229024 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.229810 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.229834 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.230240 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.230325 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.230516 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.230582 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.231586 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.248205 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.248223 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.231869 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.232025 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.232512 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.233778 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.233832 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.234211 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.234398 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.235045 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.235130 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.235233 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:27:27.735207944 +0000 UTC m=+21.156411302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.248566 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.236001 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.236286 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.236312 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.236313 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.236736 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.248651 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:27.748623573 +0000 UTC m=+21.169826991 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.239593 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.239667 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.240086 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.240535 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.248698 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:27.748689615 +0000 UTC m=+21.169893083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.240781 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.241293 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.241520 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.241743 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.248744 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:27.748722896 +0000 UTC m=+21.169926344 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.241771 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.241825 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.241860 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.241893 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.242265 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.242311 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.242688 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.242713 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.250844 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.251242 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.251264 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.251276 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.251327 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:27.751310115 +0000 UTC m=+21.172513463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.251934 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.252580 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.254486 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.254536 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.254679 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.254731 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.254971 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.255605 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.255739 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.257465 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.258026 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.258169 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.258404 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.258771 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.259687 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.260027 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.260343 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.261237 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.261942 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.261942 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.262544 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.262842 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.263550 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.265154 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.265250 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.265289 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.265462 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.266946 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.267342 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.267902 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.268099 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.268204 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.269185 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.269270 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.269429 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.272857 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.273205 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.273660 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.273702 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.274397 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.274569 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.274866 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.274929 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.275848 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.277994 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.278449 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.279273 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.282044 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.282549 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.282545 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.282643 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.282927 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.285120 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.288588 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.288701 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.289967 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.290670 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.292561 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.293345 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" 
Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.294174 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.295865 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.296831 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.298254 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.300213 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.305189 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.305922 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307008 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307112 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307273 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307303 4730 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307326 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307341 4730 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307356 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307368 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307380 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307392 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307404 4730 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc 
kubenswrapper[4730]: I0202 07:27:27.307417 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307429 4730 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307440 4730 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307453 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307465 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307478 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307492 4730 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307504 4730 reconciler_common.go:293] "Volume 
detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307515 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307528 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307540 4730 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307560 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307572 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307584 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307596 4730 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307608 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307620 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307632 4730 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307644 4730 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307658 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307670 4730 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307682 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on 
node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307694 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307706 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307719 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307731 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307743 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307751 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308204 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308263 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.307755 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308295 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308308 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308320 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308333 4730 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308345 4730 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308356 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308368 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308383 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308395 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308407 4730 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308420 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308432 4730 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308462 4730 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308525 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308605 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308625 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308638 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308652 4730 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308664 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308676 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308687 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308700 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308712 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308723 4730 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308735 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308747 4730 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308758 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308783 4730 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308794 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308836 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.308890 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309466 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309545 4730 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 
07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309598 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309629 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309644 4730 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309658 4730 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309672 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309692 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309705 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309718 4730 reconciler_common.go:293] "Volume 
detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309795 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309809 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309823 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309837 4730 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309849 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309863 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309875 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309887 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309899 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309912 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309925 4730 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309937 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309949 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309961 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath 
\"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309974 4730 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309986 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.309997 4730 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310010 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310022 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310034 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310045 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310057 4730 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310069 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310082 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310094 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310108 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310120 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310132 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310145 4730 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310174 4730 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310199 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310221 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310235 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310250 4730 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310263 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310282 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310295 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310307 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310319 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310333 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310345 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310357 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310369 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310380 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310393 4730 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310404 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310415 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310427 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310439 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310451 4730 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: 
I0202 07:27:27.310463 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310476 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310488 4730 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310500 4730 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310512 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310522 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310534 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310545 4730 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310557 4730 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310568 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310580 4730 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.310999 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.312204 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.313046 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.314065 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.314926 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.316395 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.317009 4730 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" 
path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.318724 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.322456 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.322797 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.323062 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.323328 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.324145 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.324749 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.325817 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.326607 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.327525 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.328127 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.329334 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.329760 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.330704 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.331301 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.332209 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.332644 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.333545 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.334018 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.334428 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.335136 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.335625 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.336461 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.336909 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.337404 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.338325 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.338800 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.349152 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.357524 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.369393 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.369955 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.371140 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f" exitCode=255 Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.371239 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f"} Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.385826 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.386153 4730 scope.go:117] "RemoveContainer" containerID="3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.410106 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.415289 4730 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.415324 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.445753 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.464530 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-82v75"] Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.464723 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-pf2vl"] Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.464871 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-pf2vl" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.465109 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-82v75" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.467025 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.467378 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.467621 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.467956 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.468360 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.468673 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.473510 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.473795 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":
\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserv
er-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.491679 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.503847 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.509094 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: W0202 07:27:27.517208 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-13e2ba7c1fc9cf114e0c2728d1e9937f9e939bd17450a5f90ca9c31aee51bf7a WatchSource:0}: Error finding container 13e2ba7c1fc9cf114e0c2728d1e9937f9e939bd17450a5f90ca9c31aee51bf7a: Status 404 returned error can't find the container with id 13e2ba7c1fc9cf114e0c2728d1e9937f9e939bd17450a5f90ca9c31aee51bf7a Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.517459 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/75645fc5-304c-488e-830a-4564f86f3fae-hosts-file\") pod \"node-resolver-pf2vl\" (UID: \"75645fc5-304c-488e-830a-4564f86f3fae\") " pod="openshift-dns/node-resolver-pf2vl" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.517487 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjjj4\" (UniqueName: 
\"kubernetes.io/projected/75645fc5-304c-488e-830a-4564f86f3fae-kube-api-access-gjjj4\") pod \"node-resolver-pf2vl\" (UID: \"75645fc5-304c-488e-830a-4564f86f3fae\") " pod="openshift-dns/node-resolver-pf2vl" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.517512 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e46a89bb-4594-410c-8da8-6935fa870ea4-host\") pod \"node-ca-82v75\" (UID: \"e46a89bb-4594-410c-8da8-6935fa870ea4\") " pod="openshift-image-registry/node-ca-82v75" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.517529 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8xrg\" (UniqueName: \"kubernetes.io/projected/e46a89bb-4594-410c-8da8-6935fa870ea4-kube-api-access-p8xrg\") pod \"node-ca-82v75\" (UID: \"e46a89bb-4594-410c-8da8-6935fa870ea4\") " pod="openshift-image-registry/node-ca-82v75" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.517551 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e46a89bb-4594-410c-8da8-6935fa870ea4-serviceca\") pod \"node-ca-82v75\" (UID: \"e46a89bb-4594-410c-8da8-6935fa870ea4\") " pod="openshift-image-registry/node-ca-82v75" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.523989 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.527253 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 07:27:27 crc kubenswrapper[4730]: W0202 07:27:27.528287 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-b22fcf9a4a4176bfb25061e3a01943d3a3eeec3da48ceb4822e8ca944ab729f0 WatchSource:0}: Error finding container b22fcf9a4a4176bfb25061e3a01943d3a3eeec3da48ceb4822e8ca944ab729f0: Status 404 returned error can't find the container with id b22fcf9a4a4176bfb25061e3a01943d3a3eeec3da48ceb4822e8ca944ab729f0 Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.531538 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.536666 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.560304 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.574485 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.591778 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.603904 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.617863 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e46a89bb-4594-410c-8da8-6935fa870ea4-host\") pod \"node-ca-82v75\" (UID: \"e46a89bb-4594-410c-8da8-6935fa870ea4\") " pod="openshift-image-registry/node-ca-82v75" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.617913 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8xrg\" (UniqueName: \"kubernetes.io/projected/e46a89bb-4594-410c-8da8-6935fa870ea4-kube-api-access-p8xrg\") pod \"node-ca-82v75\" (UID: \"e46a89bb-4594-410c-8da8-6935fa870ea4\") " pod="openshift-image-registry/node-ca-82v75" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.617945 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e46a89bb-4594-410c-8da8-6935fa870ea4-serviceca\") pod \"node-ca-82v75\" (UID: \"e46a89bb-4594-410c-8da8-6935fa870ea4\") " pod="openshift-image-registry/node-ca-82v75" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.618028 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.618440 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e46a89bb-4594-410c-8da8-6935fa870ea4-host\") pod \"node-ca-82v75\" (UID: \"e46a89bb-4594-410c-8da8-6935fa870ea4\") " 
pod="openshift-image-registry/node-ca-82v75" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.619132 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/75645fc5-304c-488e-830a-4564f86f3fae-hosts-file\") pod \"node-resolver-pf2vl\" (UID: \"75645fc5-304c-488e-830a-4564f86f3fae\") " pod="openshift-dns/node-resolver-pf2vl" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.619143 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/75645fc5-304c-488e-830a-4564f86f3fae-hosts-file\") pod \"node-resolver-pf2vl\" (UID: \"75645fc5-304c-488e-830a-4564f86f3fae\") " pod="openshift-dns/node-resolver-pf2vl" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.619194 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjjj4\" (UniqueName: \"kubernetes.io/projected/75645fc5-304c-488e-830a-4564f86f3fae-kube-api-access-gjjj4\") pod \"node-resolver-pf2vl\" (UID: \"75645fc5-304c-488e-830a-4564f86f3fae\") " pod="openshift-dns/node-resolver-pf2vl" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.618894 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e46a89bb-4594-410c-8da8-6935fa870ea4-serviceca\") pod \"node-ca-82v75\" (UID: \"e46a89bb-4594-410c-8da8-6935fa870ea4\") " pod="openshift-image-registry/node-ca-82v75" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.638359 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.640888 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjjj4\" (UniqueName: \"kubernetes.io/projected/75645fc5-304c-488e-830a-4564f86f3fae-kube-api-access-gjjj4\") pod \"node-resolver-pf2vl\" (UID: \"75645fc5-304c-488e-830a-4564f86f3fae\") " pod="openshift-dns/node-resolver-pf2vl" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.641994 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8xrg\" (UniqueName: \"kubernetes.io/projected/e46a89bb-4594-410c-8da8-6935fa870ea4-kube-api-access-p8xrg\") pod \"node-ca-82v75\" (UID: \"e46a89bb-4594-410c-8da8-6935fa870ea4\") " pod="openshift-image-registry/node-ca-82v75" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.650060 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.669341 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.677139 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.685501 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.695238 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.706620 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.717530 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-02 07:22:26 +0000 UTC, rotation deadline is 2026-11-28 11:20:41.204562079 +0000 UTC Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.717648 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7179h53m13.486917097s for next certificate rotation Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.792848 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-pf2vl" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.798916 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-82v75" Feb 02 07:27:27 crc kubenswrapper[4730]: W0202 07:27:27.801255 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75645fc5_304c_488e_830a_4564f86f3fae.slice/crio-e20b72d852c5870be649abc1a06ce27e3ff06823f398f05a176dd676d98f2175 WatchSource:0}: Error finding container e20b72d852c5870be649abc1a06ce27e3ff06823f398f05a176dd676d98f2175: Status 404 returned error can't find the container with id e20b72d852c5870be649abc1a06ce27e3ff06823f398f05a176dd676d98f2175 Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.821049 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.821213 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.821238 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:27:28.821204333 +0000 UTC m=+22.242407831 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.821292 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.821353 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.821354 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: I0202 07:27:27.821393 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.821448 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:28.821429119 +0000 UTC m=+22.242632467 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.821467 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.821552 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.821585 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.821600 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.821559 4730 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:28.821542172 +0000 UTC m=+22.242745520 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.821563 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.821715 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:28.821689876 +0000 UTC m=+22.242893364 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.821857 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.821902 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:27 crc kubenswrapper[4730]: E0202 07:27:27.822027 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:28.821993824 +0000 UTC m=+22.243197352 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:27 crc kubenswrapper[4730]: W0202 07:27:27.828649 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode46a89bb_4594_410c_8da8_6935fa870ea4.slice/crio-7fa6d057aa6617073b50b91c3cb11860de93bdbdc618c66181811f2838fd44f4 WatchSource:0}: Error finding container 7fa6d057aa6617073b50b91c3cb11860de93bdbdc618c66181811f2838fd44f4: Status 404 returned error can't find the container with id 7fa6d057aa6617073b50b91c3cb11860de93bdbdc618c66181811f2838fd44f4 Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.244945 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:03:47.444673357 +0000 UTC Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.266039 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ghs2t"] Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.266472 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.266929 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-z7nht"] Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.267467 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-zp8tp"] Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.267667 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.267687 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.270993 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.273859 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.274001 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.274029 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.274033 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.274785 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.274793 4730 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.275367 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.275780 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-54z89"] Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.276562 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.276987 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.277202 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.277224 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.277277 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.281003 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.281452 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.281596 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.283429 4730 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.283934 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.284951 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.285513 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.290996 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.302329 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.312034 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.320720 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327355 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-var-lib-openvswitch\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 
crc kubenswrapper[4730]: I0202 07:27:28.327384 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-run-netns\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327398 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-var-lib-cni-multus\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327413 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-var-lib-kubelet\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327432 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbxn9\" (UniqueName: \"kubernetes.io/projected/00b75ed7-302d-4f21-9c20-6ecab241b7b4-kube-api-access-bbxn9\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327448 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/61cde55f-e8c2-493e-82b6-a3b4a839366b-proxy-tls\") pod \"machine-config-daemon-ghs2t\" (UID: \"61cde55f-e8c2-493e-82b6-a3b4a839366b\") " pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:27:28 crc 
kubenswrapper[4730]: I0202 07:27:28.327464 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-etc-kubernetes\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327477 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-env-overrides\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327493 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-slash\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327506 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-run-ovn-kubernetes\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327522 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-var-lib-cni-bin\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc 
kubenswrapper[4730]: I0202 07:27:28.327534 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-multus-conf-dir\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327560 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/736a9d82-2671-4b6b-a9f2-2488de13b521-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327573 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-systemd-units\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327586 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/00b75ed7-302d-4f21-9c20-6ecab241b7b4-multus-daemon-config\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327600 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/61cde55f-e8c2-493e-82b6-a3b4a839366b-mcd-auth-proxy-config\") pod \"machine-config-daemon-ghs2t\" (UID: \"61cde55f-e8c2-493e-82b6-a3b4a839366b\") " 
pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327615 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dztbk\" (UniqueName: \"kubernetes.io/projected/61cde55f-e8c2-493e-82b6-a3b4a839366b-kube-api-access-dztbk\") pod \"machine-config-daemon-ghs2t\" (UID: \"61cde55f-e8c2-493e-82b6-a3b4a839366b\") " pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327630 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/736a9d82-2671-4b6b-a9f2-2488de13b521-os-release\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327645 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/00b75ed7-302d-4f21-9c20-6ecab241b7b4-cni-binary-copy\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327660 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovnkube-config\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327675 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqqdp\" (UniqueName: 
\"kubernetes.io/projected/736a9d82-2671-4b6b-a9f2-2488de13b521-kube-api-access-sqqdp\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327691 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-ovn\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327704 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-os-release\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327718 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-kubelet\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327740 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-run-k8s-cni-cncf-io\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327753 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/736a9d82-2671-4b6b-a9f2-2488de13b521-cnibin\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327766 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-run-netns\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327778 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-etc-openvswitch\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327793 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-node-log\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327816 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-run-multus-certs\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327831 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327854 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovnkube-script-lib\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327868 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-multus-socket-dir-parent\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327883 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/736a9d82-2671-4b6b-a9f2-2488de13b521-system-cni-dir\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327896 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-systemd\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327908 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-openvswitch\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327924 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-cni-netd\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327945 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-system-cni-dir\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327960 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-multus-cni-dir\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.327976 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dbxs\" (UniqueName: \"kubernetes.io/projected/ba7d1b84-4596-463a-bc77-c365c3c969b0-kube-api-access-5dbxs\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.328098 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-hostroot\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.328286 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/61cde55f-e8c2-493e-82b6-a3b4a839366b-rootfs\") pod \"machine-config-daemon-ghs2t\" (UID: \"61cde55f-e8c2-493e-82b6-a3b4a839366b\") " pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.328357 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/736a9d82-2671-4b6b-a9f2-2488de13b521-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.328438 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovn-node-metrics-cert\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.328468 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-cnibin\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.328484 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/736a9d82-2671-4b6b-a9f2-2488de13b521-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.328501 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-log-socket\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.328517 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-cni-bin\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.331935 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.342646 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.351524 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.358547 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.366819 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.374779 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pf2vl" event={"ID":"75645fc5-304c-488e-830a-4564f86f3fae","Type":"ContainerStarted","Data":"af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f"} Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.374837 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pf2vl" event={"ID":"75645fc5-304c-488e-830a-4564f86f3fae","Type":"ContainerStarted","Data":"e20b72d852c5870be649abc1a06ce27e3ff06823f398f05a176dd676d98f2175"} Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.375623 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.376684 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388"} Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.376715 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e"} Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.376729 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b22fcf9a4a4176bfb25061e3a01943d3a3eeec3da48ceb4822e8ca944ab729f0"} Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.378131 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d"} Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.378201 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"13e2ba7c1fc9cf114e0c2728d1e9937f9e939bd17450a5f90ca9c31aee51bf7a"} Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.379075 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1fe37f6bec5cf75be98534d51df7babdff8f5d88686cfa4168af072e9a8d9e79"} Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.380877 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.382310 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6"} Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.382735 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.383776 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-82v75" event={"ID":"e46a89bb-4594-410c-8da8-6935fa870ea4","Type":"ContainerStarted","Data":"821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc"} Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.383809 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-82v75" event={"ID":"e46a89bb-4594-410c-8da8-6935fa870ea4","Type":"ContainerStarted","Data":"7fa6d057aa6617073b50b91c3cb11860de93bdbdc618c66181811f2838fd44f4"} Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.386063 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.386510 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.402618 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.412879 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.422506 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.429925 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-hostroot\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.429965 4730 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/61cde55f-e8c2-493e-82b6-a3b4a839366b-rootfs\") pod \"machine-config-daemon-ghs2t\" (UID: \"61cde55f-e8c2-493e-82b6-a3b4a839366b\") " pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.429985 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/736a9d82-2671-4b6b-a9f2-2488de13b521-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430025 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-cnibin\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430042 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/736a9d82-2671-4b6b-a9f2-2488de13b521-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430046 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-hostroot\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430059 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-log-socket\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430088 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-log-socket\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430108 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-cni-bin\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430135 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovn-node-metrics-cert\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430179 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-run-netns\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430202 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-var-lib-cni-multus\") pod 
\"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430223 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-var-lib-kubelet\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430226 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-cni-bin\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430247 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbxn9\" (UniqueName: \"kubernetes.io/projected/00b75ed7-302d-4f21-9c20-6ecab241b7b4-kube-api-access-bbxn9\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430265 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-var-lib-kubelet\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430250 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-run-netns\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 
07:27:28.430080 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/61cde55f-e8c2-493e-82b6-a3b4a839366b-rootfs\") pod \"machine-config-daemon-ghs2t\" (UID: \"61cde55f-e8c2-493e-82b6-a3b4a839366b\") " pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430226 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-cnibin\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430272 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/61cde55f-e8c2-493e-82b6-a3b4a839366b-proxy-tls\") pod \"machine-config-daemon-ghs2t\" (UID: \"61cde55f-e8c2-493e-82b6-a3b4a839366b\") " pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430355 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-var-lib-openvswitch\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430291 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-var-lib-cni-multus\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430393 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-etc-kubernetes\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430410 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-var-lib-openvswitch\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430418 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-env-overrides\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430441 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-run-ovn-kubernetes\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430444 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-etc-kubernetes\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430477 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-var-lib-cni-bin\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430501 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-multus-conf-dir\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430513 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-run-ovn-kubernetes\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430542 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-var-lib-cni-bin\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430527 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/736a9d82-2671-4b6b-a9f2-2488de13b521-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430601 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-systemd-units\") 
pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430603 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-multus-conf-dir\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430631 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-slash\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430646 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/736a9d82-2671-4b6b-a9f2-2488de13b521-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430663 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-slash\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430676 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-systemd-units\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430687 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/00b75ed7-302d-4f21-9c20-6ecab241b7b4-multus-daemon-config\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430715 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/61cde55f-e8c2-493e-82b6-a3b4a839366b-mcd-auth-proxy-config\") pod \"machine-config-daemon-ghs2t\" (UID: \"61cde55f-e8c2-493e-82b6-a3b4a839366b\") " pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430731 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dztbk\" (UniqueName: \"kubernetes.io/projected/61cde55f-e8c2-493e-82b6-a3b4a839366b-kube-api-access-dztbk\") pod \"machine-config-daemon-ghs2t\" (UID: \"61cde55f-e8c2-493e-82b6-a3b4a839366b\") " pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430756 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/736a9d82-2671-4b6b-a9f2-2488de13b521-os-release\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430774 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/00b75ed7-302d-4f21-9c20-6ecab241b7b4-cni-binary-copy\") pod \"multus-zp8tp\" (UID: 
\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430790 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovnkube-config\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430810 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqqdp\" (UniqueName: \"kubernetes.io/projected/736a9d82-2671-4b6b-a9f2-2488de13b521-kube-api-access-sqqdp\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430829 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-ovn\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430865 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-os-release\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430882 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-kubelet\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 
02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430907 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-run-k8s-cni-cncf-io\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430924 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/736a9d82-2671-4b6b-a9f2-2488de13b521-cnibin\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.430944 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-run-netns\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431001 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-node-log\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431034 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-run-multus-certs\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431050 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-etc-openvswitch\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431067 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431093 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-ovn\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431127 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovnkube-script-lib\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431174 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-multus-socket-dir-parent\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431189 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/736a9d82-2671-4b6b-a9f2-2488de13b521-system-cni-dir\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431204 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-systemd\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431223 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-openvswitch\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431239 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-cni-netd\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431272 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-system-cni-dir\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431303 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" 
(UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-multus-cni-dir\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431330 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dbxs\" (UniqueName: \"kubernetes.io/projected/ba7d1b84-4596-463a-bc77-c365c3c969b0-kube-api-access-5dbxs\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431340 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-system-cni-dir\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431351 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-os-release\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431368 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-etc-openvswitch\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431064 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-kubelet\") pod \"ovnkube-node-54z89\" (UID: 
\"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431221 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-run-multus-certs\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431485 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-multus-cni-dir\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431566 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431587 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-openvswitch\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431595 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/736a9d82-2671-4b6b-a9f2-2488de13b521-cnibin\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " 
pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431617 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-run-netns\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431630 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-multus-socket-dir-parent\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431711 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/736a9d82-2671-4b6b-a9f2-2488de13b521-os-release\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431732 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-node-log\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431753 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/736a9d82-2671-4b6b-a9f2-2488de13b521-system-cni-dir\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc 
kubenswrapper[4730]: I0202 07:27:28.431766 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-systemd\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431779 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-cni-netd\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.431789 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/00b75ed7-302d-4f21-9c20-6ecab241b7b4-host-run-k8s-cni-cncf-io\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.432396 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/00b75ed7-302d-4f21-9c20-6ecab241b7b4-cni-binary-copy\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.433080 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/736a9d82-2671-4b6b-a9f2-2488de13b521-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.433215 4730 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-env-overrides\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.433586 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/61cde55f-e8c2-493e-82b6-a3b4a839366b-mcd-auth-proxy-config\") pod \"machine-config-daemon-ghs2t\" (UID: \"61cde55f-e8c2-493e-82b6-a3b4a839366b\") " pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.433584 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/00b75ed7-302d-4f21-9c20-6ecab241b7b4-multus-daemon-config\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.433897 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovnkube-config\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.433912 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovnkube-script-lib\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.434300 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/736a9d82-2671-4b6b-a9f2-2488de13b521-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.434606 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.434887 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovn-node-metrics-cert\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.440594 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/61cde55f-e8c2-493e-82b6-a3b4a839366b-proxy-tls\") pod \"machine-config-daemon-ghs2t\" (UID: \"61cde55f-e8c2-493e-82b6-a3b4a839366b\") " pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.447035 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.450713 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqqdp\" (UniqueName: \"kubernetes.io/projected/736a9d82-2671-4b6b-a9f2-2488de13b521-kube-api-access-sqqdp\") pod \"multus-additional-cni-plugins-z7nht\" (UID: \"736a9d82-2671-4b6b-a9f2-2488de13b521\") " pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.452691 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dztbk\" (UniqueName: \"kubernetes.io/projected/61cde55f-e8c2-493e-82b6-a3b4a839366b-kube-api-access-dztbk\") pod \"machine-config-daemon-ghs2t\" (UID: \"61cde55f-e8c2-493e-82b6-a3b4a839366b\") " pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.453036 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dbxs\" (UniqueName: \"kubernetes.io/projected/ba7d1b84-4596-463a-bc77-c365c3c969b0-kube-api-access-5dbxs\") pod \"ovnkube-node-54z89\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.460403 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.463589 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbxn9\" (UniqueName: \"kubernetes.io/projected/00b75ed7-302d-4f21-9c20-6ecab241b7b4-kube-api-access-bbxn9\") pod \"multus-zp8tp\" (UID: \"00b75ed7-302d-4f21-9c20-6ecab241b7b4\") " pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.467784 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.474646 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.482296 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.494357 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.502640 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.512094 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.526942 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.544635 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.556446 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.565838 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.577421 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.581145 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.588842 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.590772 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z7nht" Feb 02 07:27:28 crc kubenswrapper[4730]: W0202 07:27:28.591214 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61cde55f_e8c2_493e_82b6_a3b4a839366b.slice/crio-100e928a0a3bbd4ae82282129673a3bce1e1c8071bf7ffe1750f6a522349b8e2 WatchSource:0}: Error finding container 100e928a0a3bbd4ae82282129673a3bce1e1c8071bf7ffe1750f6a522349b8e2: Status 404 returned error can't find the container with id 100e928a0a3bbd4ae82282129673a3bce1e1c8071bf7ffe1750f6a522349b8e2 Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.598230 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zp8tp" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.604743 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:28 crc kubenswrapper[4730]: W0202 07:27:28.611880 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod736a9d82_2671_4b6b_a9f2_2488de13b521.slice/crio-5094b201ec348e3f7159b5f9135aa3aa7754302ae61a2f0e105a5251c558987b WatchSource:0}: Error finding container 5094b201ec348e3f7159b5f9135aa3aa7754302ae61a2f0e105a5251c558987b: Status 404 returned error can't find the container with id 5094b201ec348e3f7159b5f9135aa3aa7754302ae61a2f0e105a5251c558987b Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.613802 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.626387 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:28 crc kubenswrapper[4730]: W0202 07:27:28.627490 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba7d1b84_4596_463a_bc77_c365c3c969b0.slice/crio-fcb185b3ef8f66d93abcb2be561742c7daf41a118df92c2def7f3d16e62b87a9 WatchSource:0}: Error finding container fcb185b3ef8f66d93abcb2be561742c7daf41a118df92c2def7f3d16e62b87a9: Status 404 returned error can't find the container with id fcb185b3ef8f66d93abcb2be561742c7daf41a118df92c2def7f3d16e62b87a9 Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.638521 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d18
8c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.668728 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.702898 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.747114 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.781299 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.836091 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:27:28 crc kubenswrapper[4730]: E0202 07:27:28.836240 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 07:27:30.836223979 +0000 UTC m=+24.257427327 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.836279 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.836302 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.836323 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:28 crc kubenswrapper[4730]: I0202 07:27:28.836340 4730 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:28 crc kubenswrapper[4730]: E0202 07:27:28.836406 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:27:28 crc kubenswrapper[4730]: E0202 07:27:28.836444 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 07:27:28 crc kubenswrapper[4730]: E0202 07:27:28.836457 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 07:27:28 crc kubenswrapper[4730]: E0202 07:27:28.836476 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:28 crc kubenswrapper[4730]: E0202 07:27:28.836484 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:30.836464775 +0000 UTC m=+24.257668123 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:27:28 crc kubenswrapper[4730]: E0202 07:27:28.836504 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:30.836497876 +0000 UTC m=+24.257701224 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:28 crc kubenswrapper[4730]: E0202 07:27:28.836541 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 07:27:28 crc kubenswrapper[4730]: E0202 07:27:28.836551 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 07:27:28 crc kubenswrapper[4730]: E0202 07:27:28.836558 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:28 crc 
kubenswrapper[4730]: E0202 07:27:28.836560 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 07:27:28 crc kubenswrapper[4730]: E0202 07:27:28.836578 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:30.836572368 +0000 UTC m=+24.257775716 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:28 crc kubenswrapper[4730]: E0202 07:27:28.836592 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:30.836583948 +0000 UTC m=+24.257787436 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.246130 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 15:40:04.017339716 +0000 UTC Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.252514 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.252541 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.252539 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:29 crc kubenswrapper[4730]: E0202 07:27:29.252659 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:29 crc kubenswrapper[4730]: E0202 07:27:29.252783 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:29 crc kubenswrapper[4730]: E0202 07:27:29.252874 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.256257 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.257079 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.258344 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.259238 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.259836 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.261038 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.388043 4730 generic.go:334] "Generic (PLEG): container finished" podID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerID="3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46" exitCode=0 Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.388143 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerDied","Data":"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46"} Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.388230 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerStarted","Data":"fcb185b3ef8f66d93abcb2be561742c7daf41a118df92c2def7f3d16e62b87a9"} Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.389443 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zp8tp" event={"ID":"00b75ed7-302d-4f21-9c20-6ecab241b7b4","Type":"ContainerStarted","Data":"2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996"} Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.389485 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zp8tp" 
event={"ID":"00b75ed7-302d-4f21-9c20-6ecab241b7b4","Type":"ContainerStarted","Data":"fa3b4030e5e83cb8b7bbaccb2b92eaa50276ed85abafbd369f255db054f51f62"} Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.390797 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerStarted","Data":"2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f"} Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.390830 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerStarted","Data":"0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1"} Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.390843 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerStarted","Data":"100e928a0a3bbd4ae82282129673a3bce1e1c8071bf7ffe1750f6a522349b8e2"} Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.392294 4730 generic.go:334] "Generic (PLEG): container finished" podID="736a9d82-2671-4b6b-a9f2-2488de13b521" containerID="b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7" exitCode=0 Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.392805 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" event={"ID":"736a9d82-2671-4b6b-a9f2-2488de13b521","Type":"ContainerDied","Data":"b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7"} Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.392847 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" 
event={"ID":"736a9d82-2671-4b6b-a9f2-2488de13b521","Type":"ContainerStarted","Data":"5094b201ec348e3f7159b5f9135aa3aa7754302ae61a2f0e105a5251c558987b"} Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.422960 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.441706 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.467790 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.483938 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.494893 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.507171 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.520381 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.535468 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.548018 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.558601 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.571022 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.581684 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.600532 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.613892 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.628387 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.639102 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.647624 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.657649 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.668098 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.680787 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.692512 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.704342 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.730072 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.743069 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.786640 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:29 crc kubenswrapper[4730]: I0202 07:27:29.824070 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.246841 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:53:04.425065477 +0000 UTC Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.398520 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerStarted","Data":"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1"} Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.398920 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerStarted","Data":"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527"} Feb 02 07:27:30 crc 
kubenswrapper[4730]: I0202 07:27:30.398942 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerStarted","Data":"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d"} Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.398957 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerStarted","Data":"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f"} Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.398972 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerStarted","Data":"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7"} Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.398988 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerStarted","Data":"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85"} Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.400475 4730 generic.go:334] "Generic (PLEG): container finished" podID="736a9d82-2671-4b6b-a9f2-2488de13b521" containerID="e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895" exitCode=0 Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.400525 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" event={"ID":"736a9d82-2671-4b6b-a9f2-2488de13b521","Type":"ContainerDied","Data":"e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895"} Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.401746 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce"} Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.412980 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.422935 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.437191 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.451265 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.468577 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.483254 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.499028 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.515339 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.534115 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.545356 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.557288 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.571096 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.582436 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.594627 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.605565 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.615287 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.625281 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab
3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.638548 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.649350 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.660573 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.674556 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.700827 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.745475 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.779638 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.826597 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.859478 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.859601 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:30 crc kubenswrapper[4730]: E0202 07:27:30.859721 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:27:30 crc kubenswrapper[4730]: E0202 07:27:30.859734 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:27:34.85969582 +0000 UTC m=+28.280899198 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:27:30 crc kubenswrapper[4730]: E0202 07:27:30.859781 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:34.859762922 +0000 UTC m=+28.280966280 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.859636 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.859835 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.859897 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:30 crc kubenswrapper[4730]: E0202 07:27:30.859919 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 07:27:30 crc kubenswrapper[4730]: E0202 07:27:30.859957 4730 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 07:27:30 crc kubenswrapper[4730]: E0202 07:27:30.859982 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 07:27:30 crc kubenswrapper[4730]: E0202 07:27:30.860002 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:30 crc kubenswrapper[4730]: E0202 07:27:30.860018 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:34.859992768 +0000 UTC m=+28.281196156 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 07:27:30 crc kubenswrapper[4730]: E0202 07:27:30.860044 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 07:27:30 crc kubenswrapper[4730]: E0202 07:27:30.860088 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 07:27:30 crc kubenswrapper[4730]: E0202 07:27:30.860102 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:30 crc kubenswrapper[4730]: E0202 07:27:30.860054 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:34.860039129 +0000 UTC m=+28.281242507 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:30 crc kubenswrapper[4730]: E0202 07:27:30.860151 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:34.860141102 +0000 UTC m=+28.281344470 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.873861 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:30Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:30 crc kubenswrapper[4730]: I0202 07:27:30.998897 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.002138 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.006473 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.011763 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.025677 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.038032 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.054931 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.081753 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.125709 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.161206 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.208658 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.241055 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.248358 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:59:38.980453066 +0000 UTC Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.252816 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.252913 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.252940 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:31 crc kubenswrapper[4730]: E0202 07:27:31.253105 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:31 crc kubenswrapper[4730]: E0202 07:27:31.253265 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:31 crc kubenswrapper[4730]: E0202 07:27:31.253395 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.281539 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.320148 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.365832 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.403237 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.407184 4730 generic.go:334] "Generic (PLEG): container finished" podID="736a9d82-2671-4b6b-a9f2-2488de13b521" containerID="df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef" exitCode=0 Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.407277 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" event={"ID":"736a9d82-2671-4b6b-a9f2-2488de13b521","Type":"ContainerDied","Data":"df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef"} Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.442901 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.484093 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.521199 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.566336 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.599445 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.639202 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.682182 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.724439 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.761230 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.799285 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.842262 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab
3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.883189 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.919057 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:31 crc kubenswrapper[4730]: I0202 07:27:31.967616 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:31Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.002463 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.040312 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.080051 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.139121 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.161991 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.206510 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.245117 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.249224 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:50:00.868413021 +0000 UTC Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.285507 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.328084 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.362684 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f64
45c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.406942 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15d
d56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.412991 4730 generic.go:334] "Generic (PLEG): container finished" podID="736a9d82-2671-4b6b-a9f2-2488de13b521" containerID="b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2" exitCode=0 Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.413045 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" 
event={"ID":"736a9d82-2671-4b6b-a9f2-2488de13b521","Type":"ContainerDied","Data":"b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2"} Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.448300 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.491817 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.521322 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.559845 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.609368 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.645134 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.686815 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.728341 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.765107 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.802785 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.843591 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.881315 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.923989 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:32 crc kubenswrapper[4730]: I0202 07:27:32.965688 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:32Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.002506 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.046147 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.082904 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.085550 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.085582 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.085592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.085721 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.086893 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.114132 4730 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.114504 4730 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.116413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.116458 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.116474 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.116500 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.116518 4730 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:33Z","lastTransitionTime":"2026-02-02T07:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:33 crc kubenswrapper[4730]: E0202 07:27:33.137509 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.143132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.143228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.143246 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.143271 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.143288 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:33Z","lastTransitionTime":"2026-02-02T07:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:33 crc kubenswrapper[4730]: E0202 07:27:33.160697 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.165965 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.166220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.166431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.166618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.166789 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:33Z","lastTransitionTime":"2026-02-02T07:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:33 crc kubenswrapper[4730]: E0202 07:27:33.186870 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.193046 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.193110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.193120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.193138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.193152 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:33Z","lastTransitionTime":"2026-02-02T07:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:33 crc kubenswrapper[4730]: E0202 07:27:33.211039 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.215180 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.215227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.215237 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.215254 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.215265 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:33Z","lastTransitionTime":"2026-02-02T07:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:33 crc kubenswrapper[4730]: E0202 07:27:33.234069 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: E0202 07:27:33.234218 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.235911 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.235945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.235956 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.235971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.235983 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:33Z","lastTransitionTime":"2026-02-02T07:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.250253 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:09:27.226532717 +0000 UTC Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.252575 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.252580 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:33 crc kubenswrapper[4730]: E0202 07:27:33.252724 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:33 crc kubenswrapper[4730]: E0202 07:27:33.252818 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.252907 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:33 crc kubenswrapper[4730]: E0202 07:27:33.253102 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.337795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.337833 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.337844 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.337859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.337870 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:33Z","lastTransitionTime":"2026-02-02T07:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.418448 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerStarted","Data":"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee"} Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.420927 4730 generic.go:334] "Generic (PLEG): container finished" podID="736a9d82-2671-4b6b-a9f2-2488de13b521" containerID="cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318" exitCode=0 Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.421006 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" event={"ID":"736a9d82-2671-4b6b-a9f2-2488de13b521","Type":"ContainerDied","Data":"cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318"} Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.422355 4730 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.435844 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.439832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.439859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.439867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.439880 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.439890 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:33Z","lastTransitionTime":"2026-02-02T07:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.449817 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.461223 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.473843 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T0
7:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.487290 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.499671 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.512143 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.529471 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.539764 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.543285 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.543328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.543345 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.543365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.543377 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:33Z","lastTransitionTime":"2026-02-02T07:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.556361 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.579684 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.620687 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.645144 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.645208 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.645220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.645238 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.645250 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:33Z","lastTransitionTime":"2026-02-02T07:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.661148 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.707081 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:33Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.748488 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.748530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.748539 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.748583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.748598 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:33Z","lastTransitionTime":"2026-02-02T07:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.850947 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.851007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.851026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.851049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.851066 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:33Z","lastTransitionTime":"2026-02-02T07:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.953639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.953701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.953719 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.953744 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.953761 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:33Z","lastTransitionTime":"2026-02-02T07:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:33 crc kubenswrapper[4730]: I0202 07:27:33.968491 4730 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.057124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.057216 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.057241 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.057270 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.057291 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:34Z","lastTransitionTime":"2026-02-02T07:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.159712 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.159757 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.159767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.159782 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.159791 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:34Z","lastTransitionTime":"2026-02-02T07:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.251145 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:05:39.989956389 +0000 UTC Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.262654 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.262711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.262728 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.262752 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.262770 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:34Z","lastTransitionTime":"2026-02-02T07:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.366283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.366343 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.366361 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.366384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.366407 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:34Z","lastTransitionTime":"2026-02-02T07:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.428937 4730 generic.go:334] "Generic (PLEG): container finished" podID="736a9d82-2671-4b6b-a9f2-2488de13b521" containerID="120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d" exitCode=0 Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.428997 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" event={"ID":"736a9d82-2671-4b6b-a9f2-2488de13b521","Type":"ContainerDied","Data":"120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d"} Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.449495 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:34Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.467660 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:34Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.469905 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.469943 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.469953 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.469969 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.469979 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:34Z","lastTransitionTime":"2026-02-02T07:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.482908 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:34Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.498063 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:34Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.510153 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:34Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.529612 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:34Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.548429 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:34Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.566779 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:34Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.572093 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 
07:27:34.572137 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.572154 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.572203 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.572220 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:34Z","lastTransitionTime":"2026-02-02T07:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.590183 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3ada
a2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:34Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.604501 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:34Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.620328 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:34Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.633792 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:27:34Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.645279 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:34Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.659474 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab
3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:34Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.675457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.675489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.675500 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.675556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.675570 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:34Z","lastTransitionTime":"2026-02-02T07:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.777775 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.777838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.777858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.777883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.777903 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:34Z","lastTransitionTime":"2026-02-02T07:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.883319 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.883362 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.883373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.883388 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.883398 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:34Z","lastTransitionTime":"2026-02-02T07:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.900756 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:27:34 crc kubenswrapper[4730]: E0202 07:27:34.900966 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 07:27:42.900939864 +0000 UTC m=+36.322143222 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.901075 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.901122 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.901209 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.901254 4730 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:34 crc kubenswrapper[4730]: E0202 07:27:34.901279 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:27:34 crc kubenswrapper[4730]: E0202 07:27:34.901323 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 07:27:34 crc kubenswrapper[4730]: E0202 07:27:34.901384 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:42.901351775 +0000 UTC m=+36.322555163 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:27:34 crc kubenswrapper[4730]: E0202 07:27:34.901394 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 07:27:34 crc kubenswrapper[4730]: E0202 07:27:34.901419 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 07:27:34 crc kubenswrapper[4730]: E0202 07:27:34.901421 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:42.901403837 +0000 UTC m=+36.322607275 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 07:27:34 crc kubenswrapper[4730]: E0202 07:27:34.901436 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:34 crc kubenswrapper[4730]: E0202 07:27:34.901484 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:42.901470708 +0000 UTC m=+36.322674146 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:34 crc kubenswrapper[4730]: E0202 07:27:34.901386 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 07:27:34 crc kubenswrapper[4730]: E0202 07:27:34.901516 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 07:27:34 crc kubenswrapper[4730]: E0202 07:27:34.901528 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:34 crc kubenswrapper[4730]: E0202 07:27:34.901567 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:42.901554711 +0000 UTC m=+36.322758139 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.986179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.986233 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.986247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.986265 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:34 crc kubenswrapper[4730]: I0202 07:27:34.986285 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:34Z","lastTransitionTime":"2026-02-02T07:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.088825 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.088881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.088901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.088925 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.088941 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:35Z","lastTransitionTime":"2026-02-02T07:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.191867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.191931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.191947 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.191970 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.191988 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:35Z","lastTransitionTime":"2026-02-02T07:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.251406 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:36:01.658648393 +0000 UTC Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.252813 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.252851 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:35 crc kubenswrapper[4730]: E0202 07:27:35.252946 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:35 crc kubenswrapper[4730]: E0202 07:27:35.253042 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.253114 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:35 crc kubenswrapper[4730]: E0202 07:27:35.253315 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.294405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.294435 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.294446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.294460 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.294470 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:35Z","lastTransitionTime":"2026-02-02T07:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.396797 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.396865 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.396889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.396919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.396991 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:35Z","lastTransitionTime":"2026-02-02T07:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.435970 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" event={"ID":"736a9d82-2671-4b6b-a9f2-2488de13b521","Type":"ContainerStarted","Data":"bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27"} Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.441144 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerStarted","Data":"8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67"} Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.441601 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.441810 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.452428 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.470549 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.509072 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 
07:27:35.509108 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.509124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.509141 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.509155 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:35Z","lastTransitionTime":"2026-02-02T07:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.513273 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.514158 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.522175 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.537534 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.550241 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.559684 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.570575 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.583857 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.601625 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imag
eID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.611819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.611869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.611886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.611910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 
07:27:35.611927 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:35Z","lastTransitionTime":"2026-02-02T07:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.616376 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.629579 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.641850 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.654122 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.675986 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.691015 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.708393 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.713740 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 
07:27:35.713779 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.713790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.713806 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.713820 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:35Z","lastTransitionTime":"2026-02-02T07:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.725397 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.737863 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.750603 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.761692 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.773416 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.787885 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.804382 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imag
eID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.816331 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.816364 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.816373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.816387 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 
07:27:35.816397 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:35Z","lastTransitionTime":"2026-02-02T07:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.822285 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.835948 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.849260 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.858356 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.882698 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.918376 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.918409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.918419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.918432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:35 crc kubenswrapper[4730]: I0202 07:27:35.918441 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:35Z","lastTransitionTime":"2026-02-02T07:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.020648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.020718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.020742 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.020765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.020781 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:36Z","lastTransitionTime":"2026-02-02T07:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.122801 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.122837 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.122845 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.122860 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.122869 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:36Z","lastTransitionTime":"2026-02-02T07:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.225465 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.225506 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.225516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.225529 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.225537 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:36Z","lastTransitionTime":"2026-02-02T07:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.251877 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:02:05.010681729 +0000 UTC Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.328365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.328421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.328437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.328460 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.328479 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:36Z","lastTransitionTime":"2026-02-02T07:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.430887 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.430947 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.430965 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.430989 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.431043 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:36Z","lastTransitionTime":"2026-02-02T07:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.444227 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.533116 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.533198 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.533220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.533244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.533261 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:36Z","lastTransitionTime":"2026-02-02T07:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.636240 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.636310 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.636336 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.636365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.636386 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:36Z","lastTransitionTime":"2026-02-02T07:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.738980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.739193 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.739202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.739214 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.739222 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:36Z","lastTransitionTime":"2026-02-02T07:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.841292 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.841334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.841343 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.841358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.841368 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:36Z","lastTransitionTime":"2026-02-02T07:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.943502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.943552 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.943562 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.943578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:36 crc kubenswrapper[4730]: I0202 07:27:36.943590 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:36Z","lastTransitionTime":"2026-02-02T07:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.046143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.046240 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.046259 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.046282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.046299 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:37Z","lastTransitionTime":"2026-02-02T07:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.148766 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.148807 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.148817 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.148844 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.148857 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:37Z","lastTransitionTime":"2026-02-02T07:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.250845 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.250886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.250898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.250914 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.250924 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:37Z","lastTransitionTime":"2026-02-02T07:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.252012 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:46:53.252676854 +0000 UTC Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.252045 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.252095 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:37 crc kubenswrapper[4730]: E0202 07:27:37.252180 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.252225 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:37 crc kubenswrapper[4730]: E0202 07:27:37.252316 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:37 crc kubenswrapper[4730]: E0202 07:27:37.252403 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.266834 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.281232 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.292280 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.306345 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.319795 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.333065 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imag
eID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.350421 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.352479 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.352522 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.352536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.352552 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.352564 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:37Z","lastTransitionTime":"2026-02-02T07:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.364866 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.379963 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.393906 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.410559 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.421504 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.434076 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.446239 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.447862 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.455037 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.455080 4730 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.455097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.455120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.455139 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:37Z","lastTransitionTime":"2026-02-02T07:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.557109 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.557351 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.557499 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.557646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.557778 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:37Z","lastTransitionTime":"2026-02-02T07:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.660861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.660901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.660910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.660926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.660937 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:37Z","lastTransitionTime":"2026-02-02T07:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.763859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.764097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.764189 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.764273 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.764343 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:37Z","lastTransitionTime":"2026-02-02T07:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.797007 4730 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.866891 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.867100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.867203 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.867285 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.867360 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:37Z","lastTransitionTime":"2026-02-02T07:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.970215 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.970268 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.970286 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.970308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:37 crc kubenswrapper[4730]: I0202 07:27:37.970325 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:37Z","lastTransitionTime":"2026-02-02T07:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.073356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.073413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.073431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.073456 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.073472 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:38Z","lastTransitionTime":"2026-02-02T07:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.176008 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.176056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.176072 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.176094 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.176110 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:38Z","lastTransitionTime":"2026-02-02T07:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.252327 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:34:17.171501259 +0000 UTC Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.278727 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.278779 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.278790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.278810 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.278822 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:38Z","lastTransitionTime":"2026-02-02T07:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.381217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.381279 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.381296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.381320 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.381339 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:38Z","lastTransitionTime":"2026-02-02T07:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.417854 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.430245 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.442488 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512
335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.452137 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/0.log" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.455422 4730 generic.go:334] "Generic (PLEG): container finished" podID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerID="8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67" exitCode=1 Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.455473 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerDied","Data":"8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67"} Feb 02 07:27:38 
crc kubenswrapper[4730]: I0202 07:27:38.456463 4730 scope.go:117] "RemoveContainer" containerID="8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.457026 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\
\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.471547 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.486332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.486385 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.486402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.486427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.486444 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:38Z","lastTransitionTime":"2026-02-02T07:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.493609 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.508901 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.522046 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.536378 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.548827 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.568891 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.584838 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.589301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.589343 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.589354 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 
07:27:38.589370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.589382 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:38Z","lastTransitionTime":"2026-02-02T07:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.605288 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcab
b9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b8
9b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.619300 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.635796 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.650699 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.665903 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:
26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a6438
2386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.677988 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.687219 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.691577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.691637 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.691664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.691693 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.691716 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:38Z","lastTransitionTime":"2026-02-02T07:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.696696 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.707832 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.719671 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.731473 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.744623 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.755288 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.775353 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07
:27:37Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:27:37.664736 6045 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:37.664748 6045 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:27:37.664777 6045 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 07:27:37.664788 6045 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 07:27:37.664822 6045 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 07:27:37.664826 6045 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:37.664850 6045 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:37.664867 6045 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:37.664840 6045 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:37.664907 6045 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:37.664928 6045 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:37.664973 6045 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:37.664990 6045 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07:27:37.664992 6045 factory.go:656] Stopping watch factory\\\\nI0202 07:27:37.665015 6045 ovnkube.go:599] Stopped ovnkube\\\\nI0202 
07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.794430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.794495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.794517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.794548 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.794573 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:38Z","lastTransitionTime":"2026-02-02T07:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.794606 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.805446 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.818839 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:38Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.896901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.896978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.897000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.897033 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.897054 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:38Z","lastTransitionTime":"2026-02-02T07:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.999713 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.999742 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.999750 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.999763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:38 crc kubenswrapper[4730]: I0202 07:27:38.999788 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:38Z","lastTransitionTime":"2026-02-02T07:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.101926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.101966 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.101977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.101993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.102006 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:39Z","lastTransitionTime":"2026-02-02T07:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.204328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.204361 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.204370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.204383 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.204393 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:39Z","lastTransitionTime":"2026-02-02T07:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.252944 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.252951 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:37:37.194834495 +0000 UTC Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.253020 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.252963 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:39 crc kubenswrapper[4730]: E0202 07:27:39.253092 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:39 crc kubenswrapper[4730]: E0202 07:27:39.253195 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:39 crc kubenswrapper[4730]: E0202 07:27:39.253279 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.306352 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.306396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.306406 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.306427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.306441 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:39Z","lastTransitionTime":"2026-02-02T07:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.409075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.409116 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.409128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.409143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.409157 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:39Z","lastTransitionTime":"2026-02-02T07:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.459969 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/0.log" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.465451 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerStarted","Data":"6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3"} Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.465572 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.479913 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b
154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:39Z is 
after 2025-08-24T17:21:41Z" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.506855 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:39Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.511413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.511448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.511457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.511471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.511483 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:39Z","lastTransitionTime":"2026-02-02T07:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.533266 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:39Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.548581 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:39Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.568733 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:
26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a6438
2386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:39Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.578896 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:27:39Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.590228 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:39Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.600137 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab
3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:39Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.612477 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:39Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.613876 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.613938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.613961 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.613986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.614002 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:39Z","lastTransitionTime":"2026-02-02T07:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.628973 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:39Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.641378 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:39Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.652922 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:39Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.662148 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:39Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.682777 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:37Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:27:37.664736 6045 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:37.664748 6045 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:27:37.664777 6045 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 07:27:37.664788 6045 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 07:27:37.664822 6045 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 07:27:37.664826 6045 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:37.664850 6045 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:37.664867 6045 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:37.664840 6045 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:37.664907 6045 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:37.664928 6045 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:37.664973 6045 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:37.664990 6045 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07:27:37.664992 6045 factory.go:656] Stopping watch factory\\\\nI0202 07:27:37.665015 6045 ovnkube.go:599] Stopped ovnkube\\\\nI0202 
07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:39Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.716615 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.716657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.716669 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.716686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.716697 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:39Z","lastTransitionTime":"2026-02-02T07:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.818476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.818529 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.818541 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.818558 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.818570 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:39Z","lastTransitionTime":"2026-02-02T07:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.920603 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.920648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.920656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.920670 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:39 crc kubenswrapper[4730]: I0202 07:27:39.920679 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:39Z","lastTransitionTime":"2026-02-02T07:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.023422 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.023461 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.023471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.023487 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.023496 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:40Z","lastTransitionTime":"2026-02-02T07:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.126179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.126218 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.126227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.126242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.126255 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:40Z","lastTransitionTime":"2026-02-02T07:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.228679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.228721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.228733 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.228749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.228760 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:40Z","lastTransitionTime":"2026-02-02T07:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.253156 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 18:05:02.396635594 +0000 UTC Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.331211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.331258 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.331268 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.331288 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.331299 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:40Z","lastTransitionTime":"2026-02-02T07:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.434079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.434121 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.434132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.434147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.434179 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:40Z","lastTransitionTime":"2026-02-02T07:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.471500 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/1.log" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.471977 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/0.log" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.474674 4730 generic.go:334] "Generic (PLEG): container finished" podID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerID="6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3" exitCode=1 Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.474703 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerDied","Data":"6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3"} Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.474748 4730 scope.go:117] "RemoveContainer" containerID="8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.475752 4730 scope.go:117] "RemoveContainer" containerID="6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3" Feb 02 07:27:40 crc kubenswrapper[4730]: E0202 07:27:40.475999 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.494209 4730 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:40Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.512624 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:40Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.524041 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:40Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.536899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.536945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.536962 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.536985 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.537002 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:40Z","lastTransitionTime":"2026-02-02T07:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.543957 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:37Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:27:37.664736 6045 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:37.664748 6045 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:27:37.664777 6045 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 
07:27:37.664788 6045 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 07:27:37.664822 6045 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 07:27:37.664826 6045 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:37.664850 6045 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:37.664867 6045 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:37.664840 6045 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:37.664907 6045 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:37.664928 6045 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:37.664973 6045 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:37.664990 6045 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07:27:37.664992 6045 factory.go:656] Stopping watch factory\\\\nI0202 07:27:37.665015 6045 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:39Z\\\",\\\"message\\\":\\\"0202 07:27:39.434211 6174 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 07:27:39.434224 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:39.434228 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:39.434237 6174 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:27:39.434249 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 07:27:39.434268 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 
07:27:39.434257 6174 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:39.434288 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 07:27:39.434256 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:39.434313 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:39.434334 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:39.434343 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:39.434380 6174 factory.go:656] Stopping watch factory\\\\nI0202 07:27:39.434400 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:27:39.434413 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:39.434421 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"
/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:40Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.556234 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:40Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.567927 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:40Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.581776 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:40Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.595049 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:40Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.605978 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:27:40Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.617506 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:40Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.627957 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab
3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:40Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.638649 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.638679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.638687 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.638701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.638714 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:40Z","lastTransitionTime":"2026-02-02T07:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.641077 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:40Z 
is after 2025-08-24T17:21:41Z" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.653075 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af5
05cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:40Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.668487 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:40Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.741245 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.741301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.741312 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.741331 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.741343 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:40Z","lastTransitionTime":"2026-02-02T07:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.843442 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.843491 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.843503 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.843522 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.843533 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:40Z","lastTransitionTime":"2026-02-02T07:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.945987 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.946036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.946046 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.946061 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:40 crc kubenswrapper[4730]: I0202 07:27:40.946072 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:40Z","lastTransitionTime":"2026-02-02T07:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.048803 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.048852 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.048863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.048877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.048888 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:41Z","lastTransitionTime":"2026-02-02T07:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.061743 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz"] Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.062208 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.064246 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.065887 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.074718 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/590f0b91-13e8-4a5b-9422-7ea0707b10d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l7ljz\" (UID: \"590f0b91-13e8-4a5b-9422-7ea0707b10d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.074780 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/590f0b91-13e8-4a5b-9422-7ea0707b10d5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-l7ljz\" (UID: \"590f0b91-13e8-4a5b-9422-7ea0707b10d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.074869 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/590f0b91-13e8-4a5b-9422-7ea0707b10d5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l7ljz\" (UID: \"590f0b91-13e8-4a5b-9422-7ea0707b10d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.074930 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-b4dtn\" (UniqueName: \"kubernetes.io/projected/590f0b91-13e8-4a5b-9422-7ea0707b10d5-kube-api-access-b4dtn\") pod \"ovnkube-control-plane-749d76644c-l7ljz\" (UID: \"590f0b91-13e8-4a5b-9422-7ea0707b10d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.084908 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.102909 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.121584 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.136788 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.151430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.151455 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.151466 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.151483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.151495 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:41Z","lastTransitionTime":"2026-02-02T07:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.153224 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.173024 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.175606 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/590f0b91-13e8-4a5b-9422-7ea0707b10d5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l7ljz\" (UID: \"590f0b91-13e8-4a5b-9422-7ea0707b10d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.175695 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4dtn\" (UniqueName: \"kubernetes.io/projected/590f0b91-13e8-4a5b-9422-7ea0707b10d5-kube-api-access-b4dtn\") pod \"ovnkube-control-plane-749d76644c-l7ljz\" (UID: \"590f0b91-13e8-4a5b-9422-7ea0707b10d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.175760 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/590f0b91-13e8-4a5b-9422-7ea0707b10d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l7ljz\" (UID: \"590f0b91-13e8-4a5b-9422-7ea0707b10d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.175796 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/590f0b91-13e8-4a5b-9422-7ea0707b10d5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-l7ljz\" (UID: \"590f0b91-13e8-4a5b-9422-7ea0707b10d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.176616 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/590f0b91-13e8-4a5b-9422-7ea0707b10d5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l7ljz\" (UID: \"590f0b91-13e8-4a5b-9422-7ea0707b10d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" Feb 02 07:27:41 crc 
kubenswrapper[4730]: I0202 07:27:41.176807 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/590f0b91-13e8-4a5b-9422-7ea0707b10d5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-l7ljz\" (UID: \"590f0b91-13e8-4a5b-9422-7ea0707b10d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.184214 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/590f0b91-13e8-4a5b-9422-7ea0707b10d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l7ljz\" (UID: \"590f0b91-13e8-4a5b-9422-7ea0707b10d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.191657 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.193990 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4dtn\" (UniqueName: \"kubernetes.io/projected/590f0b91-13e8-4a5b-9422-7ea0707b10d5-kube-api-access-b4dtn\") pod \"ovnkube-control-plane-749d76644c-l7ljz\" (UID: \"590f0b91-13e8-4a5b-9422-7ea0707b10d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.212377 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda
1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.231031 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.248722 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.252232 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.252280 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.252332 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:41 crc kubenswrapper[4730]: E0202 07:27:41.252492 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:41 crc kubenswrapper[4730]: E0202 07:27:41.252707 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:41 crc kubenswrapper[4730]: E0202 07:27:41.252878 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.253525 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:21:50.509489189 +0000 UTC Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.254063 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.254101 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.254116 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.254138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.254154 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:41Z","lastTransitionTime":"2026-02-02T07:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.264196 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.278341 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.308212 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:37Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:27:37.664736 6045 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:37.664748 6045 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:27:37.664777 6045 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 
07:27:37.664788 6045 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 07:27:37.664822 6045 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 07:27:37.664826 6045 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:37.664850 6045 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:37.664867 6045 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:37.664840 6045 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:37.664907 6045 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:37.664928 6045 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:37.664973 6045 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:37.664990 6045 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07:27:37.664992 6045 factory.go:656] Stopping watch factory\\\\nI0202 07:27:37.665015 6045 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:39Z\\\",\\\"message\\\":\\\"0202 07:27:39.434211 6174 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 07:27:39.434224 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:39.434228 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:39.434237 6174 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:27:39.434249 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 07:27:39.434268 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 
07:27:39.434257 6174 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:39.434288 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 07:27:39.434256 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:39.434313 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:39.434334 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:39.434343 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:39.434380 6174 factory.go:656] Stopping watch factory\\\\nI0202 07:27:39.434400 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:27:39.434413 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:39.434421 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"
/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.323272 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.342408 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.356043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.356079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.356088 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.356106 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.356118 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:41Z","lastTransitionTime":"2026-02-02T07:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.381821 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" Feb 02 07:27:41 crc kubenswrapper[4730]: W0202 07:27:41.402740 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod590f0b91_13e8_4a5b_9422_7ea0707b10d5.slice/crio-1f3670a1d80fc19d1c13aea8f04e8a56a4c9129778d37acfc40438df1ecd9a3d WatchSource:0}: Error finding container 1f3670a1d80fc19d1c13aea8f04e8a56a4c9129778d37acfc40438df1ecd9a3d: Status 404 returned error can't find the container with id 1f3670a1d80fc19d1c13aea8f04e8a56a4c9129778d37acfc40438df1ecd9a3d Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.458471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.458513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.458524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.458538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.458548 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:41Z","lastTransitionTime":"2026-02-02T07:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.483241 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/1.log" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.489869 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" event={"ID":"590f0b91-13e8-4a5b-9422-7ea0707b10d5","Type":"ContainerStarted","Data":"1f3670a1d80fc19d1c13aea8f04e8a56a4c9129778d37acfc40438df1ecd9a3d"} Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.561019 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.561047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.561060 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.561099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.561111 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:41Z","lastTransitionTime":"2026-02-02T07:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.663745 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.663783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.663794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.663809 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.663819 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:41Z","lastTransitionTime":"2026-02-02T07:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.767724 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.767775 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.767792 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.767815 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.767833 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:41Z","lastTransitionTime":"2026-02-02T07:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.822283 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xrjth"] Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.822901 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:41 crc kubenswrapper[4730]: E0202 07:27:41.822986 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.837064 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.853386 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.870063 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.870102 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.870112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.870129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.870142 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:41Z","lastTransitionTime":"2026-02-02T07:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.871552 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.883840 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98k2r\" (UniqueName: \"kubernetes.io/projected/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-kube-api-access-98k2r\") pod \"network-metrics-daemon-xrjth\" (UID: \"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\") " pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.883916 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs\") pod \"network-metrics-daemon-xrjth\" (UID: \"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\") " pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.898606 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:37Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:27:37.664736 6045 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:37.664748 6045 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:27:37.664777 6045 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 
07:27:37.664788 6045 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 07:27:37.664822 6045 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 07:27:37.664826 6045 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:37.664850 6045 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:37.664867 6045 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:37.664840 6045 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:37.664907 6045 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:37.664928 6045 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:37.664973 6045 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:37.664990 6045 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07:27:37.664992 6045 factory.go:656] Stopping watch factory\\\\nI0202 07:27:37.665015 6045 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:39Z\\\",\\\"message\\\":\\\"0202 07:27:39.434211 6174 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 07:27:39.434224 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:39.434228 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:39.434237 6174 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:27:39.434249 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 07:27:39.434268 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 
07:27:39.434257 6174 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:39.434288 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 07:27:39.434256 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:39.434313 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:39.434334 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:39.434343 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:39.434380 6174 factory.go:656] Stopping watch factory\\\\nI0202 07:27:39.434400 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:27:39.434413 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:39.434421 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"
/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.918150 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc 
kubenswrapper[4730]: I0202 07:27:41.935308 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.947044 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.964132 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.972013 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.972056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.972067 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.972081 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.972091 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:41Z","lastTransitionTime":"2026-02-02T07:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.979043 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.984803 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98k2r\" (UniqueName: \"kubernetes.io/projected/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-kube-api-access-98k2r\") pod \"network-metrics-daemon-xrjth\" (UID: \"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\") " pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.984859 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs\") pod \"network-metrics-daemon-xrjth\" (UID: \"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\") " pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:41 crc kubenswrapper[4730]: E0202 07:27:41.984951 4730 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:27:41 crc kubenswrapper[4730]: E0202 07:27:41.985001 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs podName:f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc nodeName:}" failed. No retries permitted until 2026-02-02 07:27:42.484989105 +0000 UTC m=+35.906192453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs") pod "network-metrics-daemon-xrjth" (UID: "f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:27:41 crc kubenswrapper[4730]: I0202 07:27:41.991590 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0
f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:41Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.005098 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.009155 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98k2r\" (UniqueName: \"kubernetes.io/projected/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-kube-api-access-98k2r\") pod \"network-metrics-daemon-xrjth\" (UID: \"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\") " pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.019698 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15d
d56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.036205 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.046974 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.062795 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.074895 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.074951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.074964 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.074984 4730 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.074997 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:42Z","lastTransitionTime":"2026-02-02T07:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.078642 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.177440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.177506 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.177523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:42 crc 
kubenswrapper[4730]: I0202 07:27:42.177551 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.177573 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:42Z","lastTransitionTime":"2026-02-02T07:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.253941 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 10:43:55.142145478 +0000 UTC Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.281764 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.282157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.282202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.282244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.282263 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:42Z","lastTransitionTime":"2026-02-02T07:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.386408 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.386478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.386497 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.386523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.386544 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:42Z","lastTransitionTime":"2026-02-02T07:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.489558 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.489791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.489811 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.489839 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.489857 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:42Z","lastTransitionTime":"2026-02-02T07:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.491800 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs\") pod \"network-metrics-daemon-xrjth\" (UID: \"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\") " pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.491948 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.492047 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs podName:f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc nodeName:}" failed. No retries permitted until 2026-02-02 07:27:43.492020559 +0000 UTC m=+36.913223947 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs") pod "network-metrics-daemon-xrjth" (UID: "f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.495379 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" event={"ID":"590f0b91-13e8-4a5b-9422-7ea0707b10d5","Type":"ContainerStarted","Data":"559fe8cadbdd62661725c76ef1a32e2c3e0ef4e10ccd72281308e919943c9d69"} Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.495436 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" event={"ID":"590f0b91-13e8-4a5b-9422-7ea0707b10d5","Type":"ContainerStarted","Data":"567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be"} Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.510615 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.521393 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.540822 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:37Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:27:37.664736 6045 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:37.664748 6045 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:27:37.664777 6045 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 07:27:37.664788 6045 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 07:27:37.664822 6045 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 07:27:37.664826 6045 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:37.664850 6045 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:37.664867 6045 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:37.664840 6045 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:37.664907 6045 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:37.664928 6045 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:37.664973 6045 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:37.664990 6045 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07:27:37.664992 6045 factory.go:656] Stopping watch factory\\\\nI0202 07:27:37.665015 6045 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:39Z\\\",\\\"message\\\":\\\"0202 07:27:39.434211 6174 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 07:27:39.434224 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:39.434228 6174 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0202 07:27:39.434237 6174 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:27:39.434249 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 07:27:39.434268 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 07:27:39.434257 6174 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:39.434288 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 07:27:39.434256 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:39.434313 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:39.434334 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:39.434343 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:39.434380 6174 factory.go:656] Stopping watch factory\\\\nI0202 07:27:39.434400 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:27:39.434413 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:39.434421 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 
07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.553717 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.565752 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.581497 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.594239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.594288 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.594306 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.594330 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.594347 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:42Z","lastTransitionTime":"2026-02-02T07:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.596576 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.612579 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.625113 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.638387 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.652786 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab
3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.670994 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.685456 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z"
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.697367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.697432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.697450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.697475 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.697492 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:42Z","lastTransitionTime":"2026-02-02T07:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.703368 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.719770 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.733422 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:42Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.800260 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.800325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.800342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.800365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.800443 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:42Z","lastTransitionTime":"2026-02-02T07:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.903230 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.903287 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.903303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.903326 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.903342 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:42Z","lastTransitionTime":"2026-02-02T07:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.997932 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.998100 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 07:27:58.998065467 +0000 UTC m=+52.419268845 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.998406 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.998498 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.998542 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.998574 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.998593 4730 projected.go:194] Error 
preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.998539 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.998650 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:58.998635902 +0000 UTC m=+52.419839280 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 07:27:42 crc kubenswrapper[4730]: I0202 07:27:42.998750 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.998658 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.998821 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.998850 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.998862 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.998694 4730 
configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.998891 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:58.998867098 +0000 UTC m=+52.420070476 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.998918 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 07:27:58.998905259 +0000 UTC m=+52.420108637 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 07:27:42 crc kubenswrapper[4730]: E0202 07:27:42.998941 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-02 07:27:58.9989303 +0000 UTC m=+52.420133688 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.005602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.005644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.005656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.005673 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.005684 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:43Z","lastTransitionTime":"2026-02-02T07:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.107765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.107810 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.107819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.107834 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.107844 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:43Z","lastTransitionTime":"2026-02-02T07:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.210446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.210498 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.210517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.210540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.210557 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:43Z","lastTransitionTime":"2026-02-02T07:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.253015 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.253141 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:43 crc kubenswrapper[4730]: E0202 07:27:43.253196 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:43 crc kubenswrapper[4730]: E0202 07:27:43.253322 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.253446 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:43 crc kubenswrapper[4730]: E0202 07:27:43.253763 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.254495 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 12:07:11.216880602 +0000 UTC Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.313594 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.313652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.313668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.313695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.313714 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:43Z","lastTransitionTime":"2026-02-02T07:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.416374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.416456 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.416476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.416505 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.416531 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:43Z","lastTransitionTime":"2026-02-02T07:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.493035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.493095 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.493112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.493135 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.493152 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:43Z","lastTransitionTime":"2026-02-02T07:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.502573 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs\") pod \"network-metrics-daemon-xrjth\" (UID: \"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\") " pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:43 crc kubenswrapper[4730]: E0202 07:27:43.502736 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:27:43 crc kubenswrapper[4730]: E0202 07:27:43.502878 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs podName:f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc nodeName:}" failed. No retries permitted until 2026-02-02 07:27:45.50279832 +0000 UTC m=+38.924001708 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs") pod "network-metrics-daemon-xrjth" (UID: "f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:27:43 crc kubenswrapper[4730]: E0202 07:27:43.514330 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0
b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:43Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.519032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.519091 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.519107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.519134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.519150 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:43Z","lastTransitionTime":"2026-02-02T07:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:43 crc kubenswrapper[4730]: E0202 07:27:43.534376 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:43Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.538855 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.538900 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.538918 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.538939 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.538956 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:43Z","lastTransitionTime":"2026-02-02T07:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:43 crc kubenswrapper[4730]: E0202 07:27:43.558316 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:43Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.563581 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.563639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.563656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.563681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.563698 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:43Z","lastTransitionTime":"2026-02-02T07:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:43 crc kubenswrapper[4730]: E0202 07:27:43.582755 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:43Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.588478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.588525 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.588534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.588548 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.588558 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:43Z","lastTransitionTime":"2026-02-02T07:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:43 crc kubenswrapper[4730]: E0202 07:27:43.607233 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:43Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:43 crc kubenswrapper[4730]: E0202 07:27:43.607451 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.609410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.609461 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.609479 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.609502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.609521 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:43Z","lastTransitionTime":"2026-02-02T07:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.713355 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.713409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.713447 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.713470 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.713487 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:43Z","lastTransitionTime":"2026-02-02T07:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.816148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.816268 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.816293 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.816325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.816349 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:43Z","lastTransitionTime":"2026-02-02T07:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.920120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.920198 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.920215 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.920239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:43 crc kubenswrapper[4730]: I0202 07:27:43.920259 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:43Z","lastTransitionTime":"2026-02-02T07:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.023192 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.023225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.023234 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.023247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.023256 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:44Z","lastTransitionTime":"2026-02-02T07:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.125592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.125638 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.125648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.125665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.125675 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:44Z","lastTransitionTime":"2026-02-02T07:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.228434 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.228481 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.228493 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.228510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.228522 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:44Z","lastTransitionTime":"2026-02-02T07:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.251999 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:44 crc kubenswrapper[4730]: E0202 07:27:44.252123 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.255144 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 06:27:22.230143447 +0000 UTC Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.331798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.331975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.331999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.332027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.332052 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:44Z","lastTransitionTime":"2026-02-02T07:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.434572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.434606 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.434614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.434627 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.434636 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:44Z","lastTransitionTime":"2026-02-02T07:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.537119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.537156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.537184 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.537197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.537208 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:44Z","lastTransitionTime":"2026-02-02T07:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.639044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.639125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.639144 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.639193 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.639210 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:44Z","lastTransitionTime":"2026-02-02T07:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.741495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.741642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.741667 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.741695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.741717 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:44Z","lastTransitionTime":"2026-02-02T07:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.844788 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.844834 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.844845 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.844859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.844870 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:44Z","lastTransitionTime":"2026-02-02T07:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.947020 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.947068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.947078 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.947092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:44 crc kubenswrapper[4730]: I0202 07:27:44.947102 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:44Z","lastTransitionTime":"2026-02-02T07:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.050258 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.050337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.050355 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.050379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.050395 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:45Z","lastTransitionTime":"2026-02-02T07:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.153305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.153342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.153351 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.153367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.153376 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:45Z","lastTransitionTime":"2026-02-02T07:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.252749 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.252811 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:45 crc kubenswrapper[4730]: E0202 07:27:45.253998 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.254572 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:45 crc kubenswrapper[4730]: E0202 07:27:45.255203 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.255472 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:09:57.124970849 +0000 UTC Feb 02 07:27:45 crc kubenswrapper[4730]: E0202 07:27:45.257599 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.258308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.258352 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.258374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.258407 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.258426 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:45Z","lastTransitionTime":"2026-02-02T07:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.363760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.363818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.363836 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.363862 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.363881 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:45Z","lastTransitionTime":"2026-02-02T07:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.467703 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.467760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.467777 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.467807 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.467827 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:45Z","lastTransitionTime":"2026-02-02T07:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.521770 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs\") pod \"network-metrics-daemon-xrjth\" (UID: \"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\") " pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:45 crc kubenswrapper[4730]: E0202 07:27:45.521958 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:27:45 crc kubenswrapper[4730]: E0202 07:27:45.522047 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs podName:f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc nodeName:}" failed. No retries permitted until 2026-02-02 07:27:49.52202173 +0000 UTC m=+42.943225088 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs") pod "network-metrics-daemon-xrjth" (UID: "f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.570769 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.570811 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.570821 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.570836 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.570847 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:45Z","lastTransitionTime":"2026-02-02T07:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.674443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.674494 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.674507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.674524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.674536 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:45Z","lastTransitionTime":"2026-02-02T07:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.777584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.777656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.777680 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.777707 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.777725 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:45Z","lastTransitionTime":"2026-02-02T07:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.879806 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.879849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.879858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.879874 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.879887 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:45Z","lastTransitionTime":"2026-02-02T07:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.982711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.982787 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.982804 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.982828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:45 crc kubenswrapper[4730]: I0202 07:27:45.982845 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:45Z","lastTransitionTime":"2026-02-02T07:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.085065 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.085124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.085141 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.085192 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.085212 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:46Z","lastTransitionTime":"2026-02-02T07:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.187658 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.187716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.187732 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.187759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.187777 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:46Z","lastTransitionTime":"2026-02-02T07:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.252054 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:46 crc kubenswrapper[4730]: E0202 07:27:46.252276 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.256572 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:44:23.914039512 +0000 UTC Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.290216 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.290280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.290301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.290334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.290353 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:46Z","lastTransitionTime":"2026-02-02T07:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.393623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.393702 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.393751 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.393781 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.393809 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:46Z","lastTransitionTime":"2026-02-02T07:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.496814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.496880 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.496904 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.496932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.496951 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:46Z","lastTransitionTime":"2026-02-02T07:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.599521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.599591 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.599630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.599659 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.599680 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:46Z","lastTransitionTime":"2026-02-02T07:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.702238 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.702272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.702282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.702299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.702309 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:46Z","lastTransitionTime":"2026-02-02T07:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.805642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.805681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.805692 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.805706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.805717 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:46Z","lastTransitionTime":"2026-02-02T07:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.908219 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.908251 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.908259 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.908272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:46 crc kubenswrapper[4730]: I0202 07:27:46.908281 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:46Z","lastTransitionTime":"2026-02-02T07:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.010784 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.010840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.010856 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.010881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.010898 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:47Z","lastTransitionTime":"2026-02-02T07:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.113746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.114277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.114682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.114698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.114707 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:47Z","lastTransitionTime":"2026-02-02T07:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.217671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.217731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.217752 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.217780 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.217803 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:47Z","lastTransitionTime":"2026-02-02T07:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.252560 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.252583 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:47 crc kubenswrapper[4730]: E0202 07:27:47.252727 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:47 crc kubenswrapper[4730]: E0202 07:27:47.252914 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.252990 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:47 crc kubenswrapper[4730]: E0202 07:27:47.253198 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.256958 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:35:01.725005777 +0000 UTC Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.272078 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113f
ecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.289967 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.310749 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.321357 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.321398 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.321414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 
07:27:47.321435 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.321452 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:47Z","lastTransitionTime":"2026-02-02T07:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.332866 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.349362 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.381289 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f75e5442c3f2705743f1468b3794c642b08ed5c7ad5b1cd09af3416c1f27a67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:37Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:27:37.664736 6045 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:37.664748 6045 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:27:37.664777 6045 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 
07:27:37.664788 6045 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 07:27:37.664822 6045 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 07:27:37.664826 6045 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:37.664850 6045 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:37.664867 6045 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:37.664840 6045 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:37.664907 6045 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:37.664928 6045 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:37.664973 6045 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:37.664990 6045 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07:27:37.664992 6045 factory.go:656] Stopping watch factory\\\\nI0202 07:27:37.665015 6045 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:39Z\\\",\\\"message\\\":\\\"0202 07:27:39.434211 6174 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 07:27:39.434224 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:39.434228 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:39.434237 6174 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:27:39.434249 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 07:27:39.434268 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 
07:27:39.434257 6174 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:39.434288 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 07:27:39.434256 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:39.434313 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:39.434334 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:39.434343 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:39.434380 6174 factory.go:656] Stopping watch factory\\\\nI0202 07:27:39.434400 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:27:39.434413 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:39.434421 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"
/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.394977 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc 
kubenswrapper[4730]: I0202 07:27:47.412096 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.425010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.425214 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.425300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.425372 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 
07:27:47.425432 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:47Z","lastTransitionTime":"2026-02-02T07:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.426051 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.449080 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.470277 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.489153 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.504181 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.523906 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab
3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.528303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.528558 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.528675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.528779 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.528889 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:47Z","lastTransitionTime":"2026-02-02T07:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.540354 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z 
is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.553155 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.632149 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:47 crc 
kubenswrapper[4730]: I0202 07:27:47.632233 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.632250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.632277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.632295 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:47Z","lastTransitionTime":"2026-02-02T07:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.735026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.735091 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.735112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.735141 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.735197 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:47Z","lastTransitionTime":"2026-02-02T07:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.837977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.838387 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.838581 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.838821 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.839024 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:47Z","lastTransitionTime":"2026-02-02T07:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.942272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.942581 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.942731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.942873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:47 crc kubenswrapper[4730]: I0202 07:27:47.943005 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:47Z","lastTransitionTime":"2026-02-02T07:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.045321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.045391 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.045410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.045433 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.045451 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:48Z","lastTransitionTime":"2026-02-02T07:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.147958 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.148000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.148009 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.148025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.148037 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:48Z","lastTransitionTime":"2026-02-02T07:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.249906 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.249958 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.249975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.249999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.250015 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:48Z","lastTransitionTime":"2026-02-02T07:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.252436 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:48 crc kubenswrapper[4730]: E0202 07:27:48.252598 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.257427 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 07:50:54.569323831 +0000 UTC Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.353553 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.353589 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.353598 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.353610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.353621 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:48Z","lastTransitionTime":"2026-02-02T07:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.393329 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.394534 4730 scope.go:117] "RemoveContainer" containerID="6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3" Feb 02 07:27:48 crc kubenswrapper[4730]: E0202 07:27:48.394876 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.414623 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.432722 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.455793 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.455819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.455827 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.455840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.455850 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:48Z","lastTransitionTime":"2026-02-02T07:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.461321 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:39Z\\\",\\\"message\\\":\\\"0202 07:27:39.434211 6174 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 07:27:39.434224 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:39.434228 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:39.434237 6174 handler.go:190] Sending *v1.Node 
event handler 7 for removal\\\\nI0202 07:27:39.434249 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 07:27:39.434268 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 07:27:39.434257 6174 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:39.434288 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 07:27:39.434256 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:39.434313 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:39.434334 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:39.434343 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:39.434380 6174 factory.go:656] Stopping watch factory\\\\nI0202 07:27:39.434400 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:27:39.434413 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:39.434421 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6
ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.475967 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.491224 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.513786 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.532126 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.547615 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.558955 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.559013 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.559035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.559063 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.559086 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:48Z","lastTransitionTime":"2026-02-02T07:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.562860 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.575430 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.592060 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15d
d56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.612762 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.631610 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cad
bdd62661725c76ef1a32e2c3e0ef4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.653973 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda
1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.667077 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.667128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.667145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.667192 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.667210 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:48Z","lastTransitionTime":"2026-02-02T07:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.672595 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.685729 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:48Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.770534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.770578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.770588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.770603 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.770613 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:48Z","lastTransitionTime":"2026-02-02T07:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.873979 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.874043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.874064 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.874092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.874113 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:48Z","lastTransitionTime":"2026-02-02T07:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.977033 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.977637 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.977833 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.977964 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:48 crc kubenswrapper[4730]: I0202 07:27:48.978121 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:48Z","lastTransitionTime":"2026-02-02T07:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.081495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.081572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.081596 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.081630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.081654 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:49Z","lastTransitionTime":"2026-02-02T07:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.184289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.184354 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.184368 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.184387 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.184400 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:49Z","lastTransitionTime":"2026-02-02T07:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.252061 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.252125 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.252140 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:49 crc kubenswrapper[4730]: E0202 07:27:49.252247 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:49 crc kubenswrapper[4730]: E0202 07:27:49.252355 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:49 crc kubenswrapper[4730]: E0202 07:27:49.252460 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.257579 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:40:25.009304409 +0000 UTC Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.286665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.286699 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.286706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.286719 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.286727 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:49Z","lastTransitionTime":"2026-02-02T07:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.388704 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.388748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.388758 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.388777 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.388792 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:49Z","lastTransitionTime":"2026-02-02T07:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.491966 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.492397 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.492538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.492660 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.492812 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:49Z","lastTransitionTime":"2026-02-02T07:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.567377 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs\") pod \"network-metrics-daemon-xrjth\" (UID: \"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\") " pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:49 crc kubenswrapper[4730]: E0202 07:27:49.567538 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:27:49 crc kubenswrapper[4730]: E0202 07:27:49.567590 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs podName:f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc nodeName:}" failed. No retries permitted until 2026-02-02 07:27:57.567573018 +0000 UTC m=+50.988776366 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs") pod "network-metrics-daemon-xrjth" (UID: "f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.595293 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.595504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.595674 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.595811 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.595936 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:49Z","lastTransitionTime":"2026-02-02T07:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.699446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.699710 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.699832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.699999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.700117 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:49Z","lastTransitionTime":"2026-02-02T07:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.803873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.804286 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.804458 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.804648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.804868 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:49Z","lastTransitionTime":"2026-02-02T07:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.908422 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.908469 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.908479 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.908495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:49 crc kubenswrapper[4730]: I0202 07:27:49.908505 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:49Z","lastTransitionTime":"2026-02-02T07:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.011713 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.011756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.011766 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.011782 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.011793 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:50Z","lastTransitionTime":"2026-02-02T07:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.114583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.114679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.114698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.114762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.114784 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:50Z","lastTransitionTime":"2026-02-02T07:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.217646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.217716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.217738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.217767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.217788 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:50Z","lastTransitionTime":"2026-02-02T07:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.252283 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:50 crc kubenswrapper[4730]: E0202 07:27:50.252491 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.258328 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:17:29.436148573 +0000 UTC Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.320382 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.320649 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.320773 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.320896 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.321122 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:50Z","lastTransitionTime":"2026-02-02T07:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.424086 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.424646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.424725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.424805 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.424877 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:50Z","lastTransitionTime":"2026-02-02T07:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.526872 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.527257 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.527395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.527550 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.527682 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:50Z","lastTransitionTime":"2026-02-02T07:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.630455 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.630517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.630534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.630583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.630602 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:50Z","lastTransitionTime":"2026-02-02T07:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.735247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.735316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.735336 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.735362 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.735404 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:50Z","lastTransitionTime":"2026-02-02T07:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.838805 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.838859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.838877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.838898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.838915 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:50Z","lastTransitionTime":"2026-02-02T07:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.942578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.942638 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.942657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.942679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:50 crc kubenswrapper[4730]: I0202 07:27:50.942695 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:50Z","lastTransitionTime":"2026-02-02T07:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.045029 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.045089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.045107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.045131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.045148 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:51Z","lastTransitionTime":"2026-02-02T07:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.148241 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.148297 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.148308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.148321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.148330 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:51Z","lastTransitionTime":"2026-02-02T07:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.250012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.250087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.250140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.250321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.250345 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:51Z","lastTransitionTime":"2026-02-02T07:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.252306 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.252404 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:51 crc kubenswrapper[4730]: E0202 07:27:51.252458 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:51 crc kubenswrapper[4730]: E0202 07:27:51.252623 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.252812 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:51 crc kubenswrapper[4730]: E0202 07:27:51.253049 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.258442 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:35:58.237316437 +0000 UTC Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.352995 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.353052 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.353069 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.353089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.353103 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:51Z","lastTransitionTime":"2026-02-02T07:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.456078 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.456867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.457057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.457224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.457410 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:51Z","lastTransitionTime":"2026-02-02T07:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.559659 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.559695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.559706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.559720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.559730 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:51Z","lastTransitionTime":"2026-02-02T07:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.662117 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.662150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.662172 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.662188 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.662199 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:51Z","lastTransitionTime":"2026-02-02T07:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.764129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.764375 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.764449 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.764525 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.764599 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:51Z","lastTransitionTime":"2026-02-02T07:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.867756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.868151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.868385 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.868614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.868801 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:51Z","lastTransitionTime":"2026-02-02T07:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.971462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.971749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.971835 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.971921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:51 crc kubenswrapper[4730]: I0202 07:27:51.972011 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:51Z","lastTransitionTime":"2026-02-02T07:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.075723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.076031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.076104 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.076205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.076279 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:52Z","lastTransitionTime":"2026-02-02T07:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.178834 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.178865 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.178876 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.178892 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.178905 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:52Z","lastTransitionTime":"2026-02-02T07:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.252064 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:52 crc kubenswrapper[4730]: E0202 07:27:52.252243 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.258552 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 17:47:59.552019734 +0000 UTC Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.281498 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.281573 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.281585 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.281598 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.281608 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:52Z","lastTransitionTime":"2026-02-02T07:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.383992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.384043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.384057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.384084 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.384099 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:52Z","lastTransitionTime":"2026-02-02T07:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.486624 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.486677 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.486696 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.486718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.486735 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:52Z","lastTransitionTime":"2026-02-02T07:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.590086 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.590197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.590403 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.590443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.590471 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:52Z","lastTransitionTime":"2026-02-02T07:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.692980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.693038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.693055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.693106 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.693123 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:52Z","lastTransitionTime":"2026-02-02T07:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.795784 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.795971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.795993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.796056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.796077 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:52Z","lastTransitionTime":"2026-02-02T07:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.898987 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.899026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.899042 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.899066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:52 crc kubenswrapper[4730]: I0202 07:27:52.899082 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:52Z","lastTransitionTime":"2026-02-02T07:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.003831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.003876 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.003884 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.003900 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.003910 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.106100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.106147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.106158 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.106194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.106205 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.208732 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.208776 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.208786 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.208802 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.208815 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.253005 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.253050 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.253133 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:53 crc kubenswrapper[4730]: E0202 07:27:53.253131 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:53 crc kubenswrapper[4730]: E0202 07:27:53.253320 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:53 crc kubenswrapper[4730]: E0202 07:27:53.253479 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.259480 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 14:54:48.00952067 +0000 UTC Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.311353 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.311424 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.311441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.311464 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.311480 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.414493 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.414555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.414580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.414608 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.414629 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.517359 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.517397 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.517406 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.517421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.517430 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.620000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.620037 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.620046 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.620061 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.620071 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.722977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.723027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.723039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.723058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.723070 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.825319 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.825359 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.825368 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.825383 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.825393 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.862606 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.862652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.862661 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.862678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.862688 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: E0202 07:27:53.880152 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:53Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.884391 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.884417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.884425 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.884437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.884448 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: E0202 07:27:53.896333 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:53Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.900348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.900380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.900390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.900405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.900416 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: E0202 07:27:53.917620 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:53Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.921530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.921643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.921709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.921783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.921853 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: E0202 07:27:53.936748 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:53Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.940540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.940581 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.940593 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.940612 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.940623 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:53 crc kubenswrapper[4730]: E0202 07:27:53.952649 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:53Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:53 crc kubenswrapper[4730]: E0202 07:27:53.952814 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.954265 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.954296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.954306 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.954591 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:53 crc kubenswrapper[4730]: I0202 07:27:53.954617 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:53Z","lastTransitionTime":"2026-02-02T07:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.057600 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.057661 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.057679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.057703 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.057720 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:54Z","lastTransitionTime":"2026-02-02T07:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.160206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.160244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.160254 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.160267 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.160276 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:54Z","lastTransitionTime":"2026-02-02T07:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.252463 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:54 crc kubenswrapper[4730]: E0202 07:27:54.252660 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.260084 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:47:24.058933503 +0000 UTC Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.263194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.263228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.263236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.263250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.263261 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:54Z","lastTransitionTime":"2026-02-02T07:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.286833 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.295551 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.301430 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.319551 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.338933 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.352502 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: 
I0202 07:27:54.365858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.365921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.365947 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.365976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.365999 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:54Z","lastTransitionTime":"2026-02-02T07:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.368428 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z 
is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.380931 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.398448 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:
26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a6438
2386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.412951 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.425934 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.440101 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.457644 4730 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.468446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.468479 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.468496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.468520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.468551 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:54Z","lastTransitionTime":"2026-02-02T07:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.486459 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:39Z\\\",\\\"message\\\":\\\"0202 07:27:39.434211 6174 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 07:27:39.434224 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:39.434228 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:39.434237 6174 handler.go:190] Sending *v1.Node 
event handler 7 for removal\\\\nI0202 07:27:39.434249 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 07:27:39.434268 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 07:27:39.434257 6174 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:39.434288 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 07:27:39.434256 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:39.434313 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:39.434334 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:39.434343 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:39.434380 6174 factory.go:656] Stopping watch factory\\\\nI0202 07:27:39.434400 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:27:39.434413 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:39.434421 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6
ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.499323 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.514728 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.527689 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.542336 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:54Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.571205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.571264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.571282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.571305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.571325 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:54Z","lastTransitionTime":"2026-02-02T07:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.673927 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.673989 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.674006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.674033 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.674050 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:54Z","lastTransitionTime":"2026-02-02T07:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.776438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.776501 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.776517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.776539 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.776555 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:54Z","lastTransitionTime":"2026-02-02T07:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.878840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.878963 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.878982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.879007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.879026 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:54Z","lastTransitionTime":"2026-02-02T07:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.982042 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.982123 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.982148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.982208 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:54 crc kubenswrapper[4730]: I0202 07:27:54.982243 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:54Z","lastTransitionTime":"2026-02-02T07:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.085221 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.085270 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.085282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.085301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.085312 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:55Z","lastTransitionTime":"2026-02-02T07:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.188885 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.188969 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.189002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.189032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.189053 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:55Z","lastTransitionTime":"2026-02-02T07:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.252861 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.252904 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:55 crc kubenswrapper[4730]: E0202 07:27:55.253061 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.253086 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:55 crc kubenswrapper[4730]: E0202 07:27:55.253320 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:55 crc kubenswrapper[4730]: E0202 07:27:55.253545 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.260317 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:38:03.51034846 +0000 UTC Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.313501 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.313542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.313551 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.313566 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.313577 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:55Z","lastTransitionTime":"2026-02-02T07:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.420625 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.420898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.421070 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.421234 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.421423 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:55Z","lastTransitionTime":"2026-02-02T07:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.523465 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.523850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.523988 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.524108 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.524259 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:55Z","lastTransitionTime":"2026-02-02T07:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.627370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.628532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.628585 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.628614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.628633 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:55Z","lastTransitionTime":"2026-02-02T07:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.731770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.731830 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.731851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.731882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.731903 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:55Z","lastTransitionTime":"2026-02-02T07:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.833940 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.834000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.834019 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.834044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.834062 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:55Z","lastTransitionTime":"2026-02-02T07:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.935722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.935763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.935773 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.935787 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:55 crc kubenswrapper[4730]: I0202 07:27:55.935798 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:55Z","lastTransitionTime":"2026-02-02T07:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.038104 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.038183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.038201 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.038227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.038251 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:56Z","lastTransitionTime":"2026-02-02T07:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.140932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.140985 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.141002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.141027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.141044 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:56Z","lastTransitionTime":"2026-02-02T07:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.250330 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.250396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.250421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.250451 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.250475 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:56Z","lastTransitionTime":"2026-02-02T07:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.252890 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:56 crc kubenswrapper[4730]: E0202 07:27:56.253066 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.261230 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 17:33:05.163856516 +0000 UTC Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.353584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.353658 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.353682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.353715 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.353736 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:56Z","lastTransitionTime":"2026-02-02T07:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.456970 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.457027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.457045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.457069 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.457086 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:56Z","lastTransitionTime":"2026-02-02T07:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.559524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.559602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.559624 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.559652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.559674 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:56Z","lastTransitionTime":"2026-02-02T07:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.662908 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.662964 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.662975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.662992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.663003 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:56Z","lastTransitionTime":"2026-02-02T07:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.766006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.766079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.766099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.766124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.766143 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:56Z","lastTransitionTime":"2026-02-02T07:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.869085 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.869132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.869147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.869189 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.869203 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:56Z","lastTransitionTime":"2026-02-02T07:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.972056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.972120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.972138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.972194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:56 crc kubenswrapper[4730]: I0202 07:27:56.972217 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:56Z","lastTransitionTime":"2026-02-02T07:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.074591 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.074630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.074639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.074652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.074666 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:57Z","lastTransitionTime":"2026-02-02T07:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.177356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.177413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.177430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.177453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.177471 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:57Z","lastTransitionTime":"2026-02-02T07:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.252095 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.252104 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.252197 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:57 crc kubenswrapper[4730]: E0202 07:27:57.252235 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:57 crc kubenswrapper[4730]: E0202 07:27:57.252408 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:57 crc kubenswrapper[4730]: E0202 07:27:57.252517 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.262219 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 21:23:17.786785683 +0000 UTC Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.272878 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fe
cede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.280816 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.280840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.280849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.280862 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.280871 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:57Z","lastTransitionTime":"2026-02-02T07:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.292023 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.310739 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7f82ce-4e36-4150-a28d-365fcac970c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.329767 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.355959 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.374524 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.383425 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.383485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.383507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.383536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.383558 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:57Z","lastTransitionTime":"2026-02-02T07:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.396727 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:39Z\\\",\\\"message\\\":\\\"0202 07:27:39.434211 6174 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 07:27:39.434224 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:39.434228 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:39.434237 6174 handler.go:190] Sending *v1.Node 
event handler 7 for removal\\\\nI0202 07:27:39.434249 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 07:27:39.434268 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 07:27:39.434257 6174 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:39.434288 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 07:27:39.434256 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:39.434313 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:39.434334 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:39.434343 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:39.434380 6174 factory.go:656] Stopping watch factory\\\\nI0202 07:27:39.434400 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:27:39.434413 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:39.434421 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6
ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.411447 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.429899 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.446145 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.468689 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.487433 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.487481 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.487499 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.487521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.487538 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:57Z","lastTransitionTime":"2026-02-02T07:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.491398 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.509925 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.525616 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.542526 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab
3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.560411 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.576591 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:57Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.590402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.590424 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.590432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.590446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.590455 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:57Z","lastTransitionTime":"2026-02-02T07:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.652497 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs\") pod \"network-metrics-daemon-xrjth\" (UID: \"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\") " pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:57 crc kubenswrapper[4730]: E0202 07:27:57.652731 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:27:57 crc kubenswrapper[4730]: E0202 07:27:57.652834 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs podName:f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc nodeName:}" failed. No retries permitted until 2026-02-02 07:28:13.652804069 +0000 UTC m=+67.074007467 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs") pod "network-metrics-daemon-xrjth" (UID: "f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.692718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.692771 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.692790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.692812 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.692831 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:57Z","lastTransitionTime":"2026-02-02T07:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.794986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.795028 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.795040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.795058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.795069 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:57Z","lastTransitionTime":"2026-02-02T07:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.898121 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.898300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.898319 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.898341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:57 crc kubenswrapper[4730]: I0202 07:27:57.898359 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:57Z","lastTransitionTime":"2026-02-02T07:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.001146 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.001207 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.001219 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.001233 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.001243 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:58Z","lastTransitionTime":"2026-02-02T07:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.104329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.104364 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.104373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.104386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.104396 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:58Z","lastTransitionTime":"2026-02-02T07:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.206397 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.206438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.206446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.206460 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.206471 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:58Z","lastTransitionTime":"2026-02-02T07:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.252549 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:27:58 crc kubenswrapper[4730]: E0202 07:27:58.252737 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.262633 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:06:55.639657933 +0000 UTC Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.309791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.309836 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.309852 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.309873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.309888 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:58Z","lastTransitionTime":"2026-02-02T07:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.416556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.416591 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.416600 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.416614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.416625 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:58Z","lastTransitionTime":"2026-02-02T07:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.519238 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.519297 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.519307 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.519321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.519329 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:58Z","lastTransitionTime":"2026-02-02T07:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.622122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.622179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.622189 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.622202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.622211 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:58Z","lastTransitionTime":"2026-02-02T07:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.725199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.725270 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.725284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.725302 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.725314 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:58Z","lastTransitionTime":"2026-02-02T07:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.828066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.828269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.828347 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.828415 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.828490 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:58Z","lastTransitionTime":"2026-02-02T07:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.931397 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.931564 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.931599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.931633 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:58 crc kubenswrapper[4730]: I0202 07:27:58.931655 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:58Z","lastTransitionTime":"2026-02-02T07:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.034292 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.034367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.034395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.034423 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.034441 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:59Z","lastTransitionTime":"2026-02-02T07:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.068282 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.068412 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.068526 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:28:31.068492582 +0000 UTC m=+84.489696000 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.068582 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.068609 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.068627 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.068633 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.068681 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.068726 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.068924 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.068947 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.068965 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.069018 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 07:28:31.069003025 +0000 UTC m=+84.490206403 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.069098 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.069239 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:28:31.069220491 +0000 UTC m=+84.490423949 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.069310 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 07:28:31.069295133 +0000 UTC m=+84.490498521 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.069329 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.069372 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:28:31.069363035 +0000 UTC m=+84.490566383 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.137301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.137371 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.137393 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.137422 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.137458 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:59Z","lastTransitionTime":"2026-02-02T07:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.239831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.239868 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.239878 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.239893 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.239903 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:59Z","lastTransitionTime":"2026-02-02T07:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.252560 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.252663 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.253239 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.253278 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.253329 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:27:59 crc kubenswrapper[4730]: E0202 07:27:59.253541 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.253752 4730 scope.go:117] "RemoveContainer" containerID="6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.262756 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 16:00:12.321297763 +0000 UTC Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.342578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.342836 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.342854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.342878 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.342894 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:59Z","lastTransitionTime":"2026-02-02T07:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.444991 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.445032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.445046 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.445068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.445086 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:59Z","lastTransitionTime":"2026-02-02T07:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.547708 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.547738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.547749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.547764 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.547775 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:59Z","lastTransitionTime":"2026-02-02T07:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.554445 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/1.log" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.558089 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerStarted","Data":"90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31"} Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.559025 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.574520 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92
dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\
\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.592045 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"n
ame\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.615595 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.627303 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.643426 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab
3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.649912 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.649960 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.649969 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.649986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.649998 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:59Z","lastTransitionTime":"2026-02-02T07:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.666063 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z 
is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.684275 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af5
05cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.702543 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.720685 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7f82ce-4e36-4150-a28d-365fcac970c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.735733 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.752125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.752409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.752540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 
07:27:59.752706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.752855 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:59Z","lastTransitionTime":"2026-02-02T07:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.753479 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.764907 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.783061 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:39Z\\\",\\\"message\\\":\\\"0202 07:27:39.434211 6174 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 07:27:39.434224 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:39.434228 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:39.434237 6174 handler.go:190] Sending *v1.Node 
event handler 7 for removal\\\\nI0202 07:27:39.434249 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 07:27:39.434268 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 07:27:39.434257 6174 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:39.434288 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 07:27:39.434256 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:39.434313 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:39.434334 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:39.434343 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:39.434380 6174 factory.go:656] Stopping watch factory\\\\nI0202 07:27:39.434400 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:27:39.434413 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:39.434421 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 
07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.793554 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc 
kubenswrapper[4730]: I0202 07:27:59.804944 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.817288 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.835743 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:27:59Z is after 2025-08-24T17:21:41Z" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.856104 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.856184 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.856203 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.856226 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.856271 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:59Z","lastTransitionTime":"2026-02-02T07:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.958458 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.958752 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.958859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.959000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:27:59 crc kubenswrapper[4730]: I0202 07:27:59.959104 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:27:59Z","lastTransitionTime":"2026-02-02T07:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.062370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.062720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.062871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.063035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.063298 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:00Z","lastTransitionTime":"2026-02-02T07:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.166461 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.166517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.166534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.166556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.166574 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:00Z","lastTransitionTime":"2026-02-02T07:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.252773 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:00 crc kubenswrapper[4730]: E0202 07:28:00.252925 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.262906 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:38:32.177152395 +0000 UTC Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.268606 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.268656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.268675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.268699 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.268715 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:00Z","lastTransitionTime":"2026-02-02T07:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.371285 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.371339 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.371357 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.371380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.371397 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:00Z","lastTransitionTime":"2026-02-02T07:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.473977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.474316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.474453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.474578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.474689 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:00Z","lastTransitionTime":"2026-02-02T07:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.564696 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/2.log" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.565875 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/1.log" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.568682 4730 generic.go:334] "Generic (PLEG): container finished" podID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerID="90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31" exitCode=1 Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.568735 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerDied","Data":"90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31"} Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.568786 4730 scope.go:117] "RemoveContainer" containerID="6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.569774 4730 scope.go:117] "RemoveContainer" containerID="90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31" Feb 02 07:28:00 crc kubenswrapper[4730]: E0202 07:28:00.570065 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.582174 4730 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.582256 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.582271 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.582303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.582319 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:00Z","lastTransitionTime":"2026-02-02T07:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.589276 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c
314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.604978 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.617744 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7f82ce-4e36-4150-a28d-365fcac970c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.631627 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.644883 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.654455 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.672886 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ee1571a4305d7503490b153d60bde7072656d59da140b7a7a6ee02dc9e135f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:27:39Z\\\",\\\"message\\\":\\\"0202 07:27:39.434211 6174 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0202 07:27:39.434224 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 07:27:39.434228 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 07:27:39.434237 6174 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:27:39.434249 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 07:27:39.434268 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 07:27:39.434257 6174 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:27:39.434288 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 07:27:39.434256 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 07:27:39.434313 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 07:27:39.434334 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 07:27:39.434343 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 07:27:39.434380 6174 factory.go:656] Stopping watch factory\\\\nI0202 07:27:39.434400 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:27:39.434413 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 07:27:39.434421 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:00Z\\\",\\\"message\\\":\\\"160\\\\nI0202 07:28:00.183596 6432 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183708 6432 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 
07:28:00.183987 6432 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184128 6432 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184148 6432 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 07:28:00.184200 6432 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 07:28:00.186250 6432 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:28:00.186287 6432 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:28:00.186323 6432 factory.go:656] Stopping watch factory\\\\nI0202 07:28:00.186347 6432 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:28:00.186323 6432 handler.go:208] Removed *v1.Node event handler 
2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d
0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.685207 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.685247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.685257 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.685274 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.685285 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:00Z","lastTransitionTime":"2026-02-02T07:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.686255 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc 
kubenswrapper[4730]: I0202 07:28:00.699872 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.712509 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.730619 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.749343 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.769787 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee122
0d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.789329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.789398 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.789421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.789450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.789473 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:00Z","lastTransitionTime":"2026-02-02T07:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.789818 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.805620 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.822849 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15d
d56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.842625 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:00Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.892534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:00 crc 
kubenswrapper[4730]: I0202 07:28:00.892589 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.892605 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.892627 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.892644 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:00Z","lastTransitionTime":"2026-02-02T07:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.995140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.995250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.995270 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.995295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:00 crc kubenswrapper[4730]: I0202 07:28:00.995312 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:00Z","lastTransitionTime":"2026-02-02T07:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.098041 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.098076 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.098087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.098100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.098109 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:01Z","lastTransitionTime":"2026-02-02T07:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.200959 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.201005 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.201014 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.201029 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.201038 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:01Z","lastTransitionTime":"2026-02-02T07:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.252674 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.252691 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:01 crc kubenswrapper[4730]: E0202 07:28:01.252887 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.252704 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:01 crc kubenswrapper[4730]: E0202 07:28:01.252976 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:01 crc kubenswrapper[4730]: E0202 07:28:01.253012 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.263827 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:44:05.365953361 +0000 UTC Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.303972 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.304031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.304043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.304066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.304079 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:01Z","lastTransitionTime":"2026-02-02T07:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.407536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.407584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.407594 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.407614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.407627 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:01Z","lastTransitionTime":"2026-02-02T07:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.511128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.511246 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.511263 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.511284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.511295 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:01Z","lastTransitionTime":"2026-02-02T07:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.574426 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/2.log" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.578868 4730 scope.go:117] "RemoveContainer" containerID="90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31" Feb 02 07:28:01 crc kubenswrapper[4730]: E0202 07:28:01.579117 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.596692 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7f82ce-4e36-4150-a28d-365fcac970c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.614829 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.614864 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.614876 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.614915 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.614929 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:01Z","lastTransitionTime":"2026-02-02T07:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.617140 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.635740 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.648537 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.671292 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:00Z\\\",\\\"message\\\":\\\"160\\\\nI0202 07:28:00.183596 6432 reflector.go:311] Stopping reflector *v1.EndpointSlice 
(0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183708 6432 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183987 6432 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184128 6432 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184148 6432 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 07:28:00.184200 6432 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 07:28:00.186250 6432 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:28:00.186287 6432 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:28:00.186323 6432 factory.go:656] Stopping watch factory\\\\nI0202 07:28:00.186347 6432 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:28:00.186323 6432 handler.go:208] Removed *v1.Node event handler 2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6
ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.685222 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.706815 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.718020 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.718087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 
07:28:01.718112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.718145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.718199 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:01Z","lastTransitionTime":"2026-02-02T07:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.723902 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.746294 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.765254 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.780143 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.795668 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.810465 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab
3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.822205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.822273 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.822291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.822317 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.822342 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:01Z","lastTransitionTime":"2026-02-02T07:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.834184 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z 
is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.851899 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.866841 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.886514 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:01Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.925920 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.925978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.925996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:01 crc 
kubenswrapper[4730]: I0202 07:28:01.926021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:01 crc kubenswrapper[4730]: I0202 07:28:01.926043 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:01Z","lastTransitionTime":"2026-02-02T07:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.028729 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.028789 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.028805 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.028829 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.028846 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:02Z","lastTransitionTime":"2026-02-02T07:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.131713 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.131781 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.131797 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.131821 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.131863 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:02Z","lastTransitionTime":"2026-02-02T07:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.235773 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.235854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.235874 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.235905 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.235924 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:02Z","lastTransitionTime":"2026-02-02T07:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.252673 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:02 crc kubenswrapper[4730]: E0202 07:28:02.253199 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.264646 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:37:03.101670568 +0000 UTC Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.339642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.339705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.339723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.339746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.339766 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:02Z","lastTransitionTime":"2026-02-02T07:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.442547 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.442649 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.442677 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.442711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.442731 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:02Z","lastTransitionTime":"2026-02-02T07:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.545440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.545514 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.545537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.545569 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.545593 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:02Z","lastTransitionTime":"2026-02-02T07:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.648992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.649036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.649050 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.649070 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.649083 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:02Z","lastTransitionTime":"2026-02-02T07:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.751609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.751658 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.751667 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.751683 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.751695 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:02Z","lastTransitionTime":"2026-02-02T07:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.854414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.854449 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.854470 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.854486 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.854499 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:02Z","lastTransitionTime":"2026-02-02T07:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.957453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.957501 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.957518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.957540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:02 crc kubenswrapper[4730]: I0202 07:28:02.957556 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:02Z","lastTransitionTime":"2026-02-02T07:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.060416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.060462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.060478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.060502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.060519 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:03Z","lastTransitionTime":"2026-02-02T07:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.163090 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.163138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.163154 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.163204 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.163221 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:03Z","lastTransitionTime":"2026-02-02T07:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.252894 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.253005 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:03 crc kubenswrapper[4730]: E0202 07:28:03.253077 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.253202 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:03 crc kubenswrapper[4730]: E0202 07:28:03.253408 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:03 crc kubenswrapper[4730]: E0202 07:28:03.253523 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.264991 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:27:57.342916862 +0000 UTC Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.265611 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.265682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.265701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.265726 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.265743 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:03Z","lastTransitionTime":"2026-02-02T07:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.368925 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.369390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.369453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.369488 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.369512 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:03Z","lastTransitionTime":"2026-02-02T07:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.472258 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.472316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.472340 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.472371 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.472396 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:03Z","lastTransitionTime":"2026-02-02T07:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.574628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.574704 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.574728 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.574759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.574783 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:03Z","lastTransitionTime":"2026-02-02T07:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.678278 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.678315 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.678325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.678342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.678354 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:03Z","lastTransitionTime":"2026-02-02T07:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.781652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.781709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.781727 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.781751 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.781787 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:03Z","lastTransitionTime":"2026-02-02T07:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.884816 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.884884 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.884909 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.884939 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.884961 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:03Z","lastTransitionTime":"2026-02-02T07:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.988740 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.988783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.988796 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.988819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.988831 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:03Z","lastTransitionTime":"2026-02-02T07:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.990991 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.991026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.991040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.991054 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:03 crc kubenswrapper[4730]: I0202 07:28:03.991064 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:03Z","lastTransitionTime":"2026-02-02T07:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:04 crc kubenswrapper[4730]: E0202 07:28:04.011602 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:04Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.015378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.015513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.015594 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.015673 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.015753 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:04Z","lastTransitionTime":"2026-02-02T07:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:04 crc kubenswrapper[4730]: E0202 07:28:04.034412 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:04Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.038644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.038678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.038689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.038704 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.038715 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:04Z","lastTransitionTime":"2026-02-02T07:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:04 crc kubenswrapper[4730]: E0202 07:28:04.052094 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:04Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.056937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.057012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.057034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.057060 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.057079 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:04Z","lastTransitionTime":"2026-02-02T07:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:04 crc kubenswrapper[4730]: E0202 07:28:04.076110 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:04Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.080711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.080776 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.080796 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.080820 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.080837 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:04Z","lastTransitionTime":"2026-02-02T07:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:04 crc kubenswrapper[4730]: E0202 07:28:04.100584 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:04Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:04 crc kubenswrapper[4730]: E0202 07:28:04.100710 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.103073 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.103097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.103105 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.103122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.103135 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:04Z","lastTransitionTime":"2026-02-02T07:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.206377 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.206449 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.206467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.206497 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.206521 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:04Z","lastTransitionTime":"2026-02-02T07:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.252701 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:04 crc kubenswrapper[4730]: E0202 07:28:04.252977 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.265388 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:48:55.818672815 +0000 UTC Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.309901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.309968 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.309984 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.310011 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.310029 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:04Z","lastTransitionTime":"2026-02-02T07:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.413614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.413671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.413681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.413699 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.413711 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:04Z","lastTransitionTime":"2026-02-02T07:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.516998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.517059 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.517075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.517099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.517112 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:04Z","lastTransitionTime":"2026-02-02T07:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.620510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.620572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.620595 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.620628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.620648 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:04Z","lastTransitionTime":"2026-02-02T07:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.722883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.722922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.722932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.722948 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.722961 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:04Z","lastTransitionTime":"2026-02-02T07:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.826463 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.826529 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.826551 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.826576 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.826595 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:04Z","lastTransitionTime":"2026-02-02T07:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.929723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.929778 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.929794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.929816 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:04 crc kubenswrapper[4730]: I0202 07:28:04.929836 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:04Z","lastTransitionTime":"2026-02-02T07:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.032289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.032358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.032376 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.032398 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.032415 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:05Z","lastTransitionTime":"2026-02-02T07:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.146412 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.146477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.146495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.146518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.146537 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:05Z","lastTransitionTime":"2026-02-02T07:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.248804 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.248853 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.248918 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.248962 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.248989 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:05Z","lastTransitionTime":"2026-02-02T07:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.253424 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.253538 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:05 crc kubenswrapper[4730]: E0202 07:28:05.253617 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:05 crc kubenswrapper[4730]: E0202 07:28:05.253713 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.253813 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:05 crc kubenswrapper[4730]: E0202 07:28:05.254052 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.265650 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:29:11.255679232 +0000 UTC Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.363457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.363496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.363507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.363524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.363538 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:05Z","lastTransitionTime":"2026-02-02T07:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.466236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.466284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.466296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.466312 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.466323 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:05Z","lastTransitionTime":"2026-02-02T07:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.568348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.568422 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.568445 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.568476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.568497 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:05Z","lastTransitionTime":"2026-02-02T07:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.670723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.670792 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.670808 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.670833 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.670854 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:05Z","lastTransitionTime":"2026-02-02T07:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.773689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.773748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.773765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.773788 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.773805 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:05Z","lastTransitionTime":"2026-02-02T07:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.876406 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.876474 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.876526 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.876678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.876709 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:05Z","lastTransitionTime":"2026-02-02T07:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.978747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.978807 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.978824 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.978847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:05 crc kubenswrapper[4730]: I0202 07:28:05.978863 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:05Z","lastTransitionTime":"2026-02-02T07:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.081882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.081920 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.081930 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.081945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.081954 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:06Z","lastTransitionTime":"2026-02-02T07:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.185032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.185086 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.185103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.185125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.185143 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:06Z","lastTransitionTime":"2026-02-02T07:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.251986 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:06 crc kubenswrapper[4730]: E0202 07:28:06.252118 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.266483 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:14:32.93169396 +0000 UTC Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.287653 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.287717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.287741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.287768 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.287792 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:06Z","lastTransitionTime":"2026-02-02T07:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.390584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.390643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.390659 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.390686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.390701 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:06Z","lastTransitionTime":"2026-02-02T07:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.493226 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.493283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.493299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.493325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.493342 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:06Z","lastTransitionTime":"2026-02-02T07:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.595218 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.595286 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.595309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.595341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.595364 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:06Z","lastTransitionTime":"2026-02-02T07:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.698759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.698821 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.698838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.698863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.698880 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:06Z","lastTransitionTime":"2026-02-02T07:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.801265 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.801353 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.801371 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.801393 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.801410 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:06Z","lastTransitionTime":"2026-02-02T07:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.904078 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.904122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.904131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.904146 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:06 crc kubenswrapper[4730]: I0202 07:28:06.904170 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:06Z","lastTransitionTime":"2026-02-02T07:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.006610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.006687 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.006705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.006730 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.006750 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:07Z","lastTransitionTime":"2026-02-02T07:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.109931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.110024 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.110044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.110066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.110083 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:07Z","lastTransitionTime":"2026-02-02T07:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.212283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.212329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.212345 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.212366 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.212384 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:07Z","lastTransitionTime":"2026-02-02T07:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.252084 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.252083 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:07 crc kubenswrapper[4730]: E0202 07:28:07.252213 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.252277 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:07 crc kubenswrapper[4730]: E0202 07:28:07.252302 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:07 crc kubenswrapper[4730]: E0202 07:28:07.252428 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.266934 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 10:46:13.146491769 +0000 UTC Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.270849 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f
35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.289676 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.303322 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7f82ce-4e36-4150-a28d-365fcac970c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.315143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.315191 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.315202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.315220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.315231 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:07Z","lastTransitionTime":"2026-02-02T07:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.317645 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.331899 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.345835 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.378544 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:00Z\\\",\\\"message\\\":\\\"160\\\\nI0202 07:28:00.183596 6432 reflector.go:311] Stopping reflector *v1.EndpointSlice 
(0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183708 6432 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183987 6432 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184128 6432 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184148 6432 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 07:28:00.184200 6432 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 07:28:00.186250 6432 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:28:00.186287 6432 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:28:00.186323 6432 factory.go:656] Stopping watch factory\\\\nI0202 07:28:00.186347 6432 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:28:00.186323 6432 handler.go:208] Removed *v1.Node event handler 2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6
ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.391761 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.405945 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.417092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.417122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 
07:28:07.417131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.417147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.417173 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:07Z","lastTransitionTime":"2026-02-02T07:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.422717 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.445425 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.466064 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.482309 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.496421 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.513919 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab
3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.519573 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.519630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.519648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.519671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.519687 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:07Z","lastTransitionTime":"2026-02-02T07:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.531238 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z 
is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.548419 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:07Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.625858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:07 crc 
kubenswrapper[4730]: I0202 07:28:07.625906 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.625923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.625947 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.625965 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:07Z","lastTransitionTime":"2026-02-02T07:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.729014 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.729070 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.729089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.729113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.729133 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:07Z","lastTransitionTime":"2026-02-02T07:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.831694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.831743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.831776 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.831802 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.831819 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:07Z","lastTransitionTime":"2026-02-02T07:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.934875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.935290 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.935455 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.935604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:07 crc kubenswrapper[4730]: I0202 07:28:07.935730 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:07Z","lastTransitionTime":"2026-02-02T07:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.039071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.039417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.039428 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.039443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.039452 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:08Z","lastTransitionTime":"2026-02-02T07:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.142679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.142743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.142765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.142793 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.142815 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:08Z","lastTransitionTime":"2026-02-02T07:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.245700 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.245747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.245760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.245777 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.245789 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:08Z","lastTransitionTime":"2026-02-02T07:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.252360 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:08 crc kubenswrapper[4730]: E0202 07:28:08.252543 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.267091 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:01:13.08348131 +0000 UTC Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.348972 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.349045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.349057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.349073 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.349085 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:08Z","lastTransitionTime":"2026-02-02T07:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.452609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.452678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.452699 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.452727 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.452749 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:08Z","lastTransitionTime":"2026-02-02T07:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.555351 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.555410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.555427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.555450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.555470 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:08Z","lastTransitionTime":"2026-02-02T07:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.657543 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.657590 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.657601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.657616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.657627 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:08Z","lastTransitionTime":"2026-02-02T07:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.760877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.760932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.760949 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.760973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.760991 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:08Z","lastTransitionTime":"2026-02-02T07:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.864283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.864332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.864347 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.864365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.864376 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:08Z","lastTransitionTime":"2026-02-02T07:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.967827 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.967875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.967884 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.967902 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:08 crc kubenswrapper[4730]: I0202 07:28:08.967913 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:08Z","lastTransitionTime":"2026-02-02T07:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.071179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.071229 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.071239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.071254 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.071265 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:09Z","lastTransitionTime":"2026-02-02T07:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.173979 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.174046 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.174065 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.174089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.174108 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:09Z","lastTransitionTime":"2026-02-02T07:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.253060 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:09 crc kubenswrapper[4730]: E0202 07:28:09.253252 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.253395 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:09 crc kubenswrapper[4730]: E0202 07:28:09.253473 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.253548 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:09 crc kubenswrapper[4730]: E0202 07:28:09.253621 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.267535 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:51:28.789159993 +0000 UTC Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.281520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.281593 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.281612 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.281636 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.281654 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:09Z","lastTransitionTime":"2026-02-02T07:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.384472 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.384521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.384531 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.384547 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.384558 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:09Z","lastTransitionTime":"2026-02-02T07:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.487446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.487484 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.487493 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.487506 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.487516 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:09Z","lastTransitionTime":"2026-02-02T07:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.590175 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.590204 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.590213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.590226 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.590236 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:09Z","lastTransitionTime":"2026-02-02T07:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.693004 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.693035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.693046 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.693062 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.693072 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:09Z","lastTransitionTime":"2026-02-02T07:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.798052 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.798116 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.798135 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.798201 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.798220 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:09Z","lastTransitionTime":"2026-02-02T07:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.900010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.900104 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.900122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.900144 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:09 crc kubenswrapper[4730]: I0202 07:28:09.900197 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:09Z","lastTransitionTime":"2026-02-02T07:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.003015 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.003074 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.003091 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.003115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.003133 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:10Z","lastTransitionTime":"2026-02-02T07:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.105367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.105432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.105453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.105477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.105493 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:10Z","lastTransitionTime":"2026-02-02T07:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.207403 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.207453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.207463 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.207480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.207490 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:10Z","lastTransitionTime":"2026-02-02T07:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.252448 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:10 crc kubenswrapper[4730]: E0202 07:28:10.252648 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.268026 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:35:20.924847088 +0000 UTC Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.311646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.311718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.311776 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.311793 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.311806 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:10Z","lastTransitionTime":"2026-02-02T07:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.413932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.413998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.414008 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.414022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.414084 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:10Z","lastTransitionTime":"2026-02-02T07:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.516656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.516702 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.516714 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.516730 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.516741 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:10Z","lastTransitionTime":"2026-02-02T07:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.618986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.619047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.619079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.619111 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.619134 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:10Z","lastTransitionTime":"2026-02-02T07:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.721898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.721961 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.721974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.721993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.722011 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:10Z","lastTransitionTime":"2026-02-02T07:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.824726 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.824789 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.824810 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.824839 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.824860 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:10Z","lastTransitionTime":"2026-02-02T07:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.945812 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.945864 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.945885 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.945912 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:10 crc kubenswrapper[4730]: I0202 07:28:10.945933 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:10Z","lastTransitionTime":"2026-02-02T07:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.048374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.048417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.048433 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.048455 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.048472 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:11Z","lastTransitionTime":"2026-02-02T07:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.151843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.151904 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.151924 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.151948 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.151966 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:11Z","lastTransitionTime":"2026-02-02T07:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.252011 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.252076 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:11 crc kubenswrapper[4730]: E0202 07:28:11.252237 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.252277 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:11 crc kubenswrapper[4730]: E0202 07:28:11.252408 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:11 crc kubenswrapper[4730]: E0202 07:28:11.252654 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.253494 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.253551 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.253569 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.253591 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.253609 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:11Z","lastTransitionTime":"2026-02-02T07:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.268173 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:09:11.199999335 +0000 UTC Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.355850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.355874 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.355883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.355898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.355909 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:11Z","lastTransitionTime":"2026-02-02T07:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.457983 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.458013 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.458021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.458035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.458045 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:11Z","lastTransitionTime":"2026-02-02T07:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.561212 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.561284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.561301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.561329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.561348 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:11Z","lastTransitionTime":"2026-02-02T07:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.664009 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.664067 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.664080 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.664100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.664113 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:11Z","lastTransitionTime":"2026-02-02T07:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.766971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.767022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.767034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.767052 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.767064 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:11Z","lastTransitionTime":"2026-02-02T07:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.869009 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.869054 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.869067 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.869088 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.869103 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:11Z","lastTransitionTime":"2026-02-02T07:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.971446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.971495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.971506 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.971520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:11 crc kubenswrapper[4730]: I0202 07:28:11.971532 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:11Z","lastTransitionTime":"2026-02-02T07:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.073653 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.073707 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.073720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.073738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.073751 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:12Z","lastTransitionTime":"2026-02-02T07:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.176118 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.176178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.176194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.176215 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.176227 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:12Z","lastTransitionTime":"2026-02-02T07:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.252826 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:12 crc kubenswrapper[4730]: E0202 07:28:12.253014 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.268842 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:52:42.199031884 +0000 UTC Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.278728 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.278769 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.278779 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.278796 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.278806 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:12Z","lastTransitionTime":"2026-02-02T07:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.380848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.380922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.380935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.380950 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.380962 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:12Z","lastTransitionTime":"2026-02-02T07:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.483597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.483653 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.483675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.483698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.483715 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:12Z","lastTransitionTime":"2026-02-02T07:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.587263 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.587324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.587342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.587365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.587385 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:12Z","lastTransitionTime":"2026-02-02T07:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.690296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.690332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.690342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.690358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.690370 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:12Z","lastTransitionTime":"2026-02-02T07:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.793622 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.793676 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.793687 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.793709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.793727 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:12Z","lastTransitionTime":"2026-02-02T07:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.896904 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.896970 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.896986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.897012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.897027 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:12Z","lastTransitionTime":"2026-02-02T07:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.999890 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.999927 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:12 crc kubenswrapper[4730]: I0202 07:28:12.999938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:12.999952 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:12.999962 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:12Z","lastTransitionTime":"2026-02-02T07:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.102321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.102361 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.102372 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.102389 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.102401 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:13Z","lastTransitionTime":"2026-02-02T07:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.205098 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.205156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.205188 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.205210 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.205220 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:13Z","lastTransitionTime":"2026-02-02T07:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.252963 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:13 crc kubenswrapper[4730]: E0202 07:28:13.253082 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.253091 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.253150 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:13 crc kubenswrapper[4730]: E0202 07:28:13.253686 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.253742 4730 scope.go:117] "RemoveContainer" containerID="90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31" Feb 02 07:28:13 crc kubenswrapper[4730]: E0202 07:28:13.253790 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:13 crc kubenswrapper[4730]: E0202 07:28:13.254211 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.269570 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:52:50.778334697 +0000 UTC Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.307939 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.308002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.308016 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.308039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.308053 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:13Z","lastTransitionTime":"2026-02-02T07:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.414040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.414122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.414157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.414210 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.414230 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:13Z","lastTransitionTime":"2026-02-02T07:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.517628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.517672 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.517681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.517696 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.517706 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:13Z","lastTransitionTime":"2026-02-02T07:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.619625 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.619687 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.619697 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.619716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.619730 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:13Z","lastTransitionTime":"2026-02-02T07:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.722449 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.722529 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.722554 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.722585 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.722604 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:13Z","lastTransitionTime":"2026-02-02T07:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.748089 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs\") pod \"network-metrics-daemon-xrjth\" (UID: \"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\") " pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:13 crc kubenswrapper[4730]: E0202 07:28:13.748239 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:28:13 crc kubenswrapper[4730]: E0202 07:28:13.748294 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs podName:f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc nodeName:}" failed. No retries permitted until 2026-02-02 07:28:45.74827965 +0000 UTC m=+99.169482998 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs") pod "network-metrics-daemon-xrjth" (UID: "f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.824677 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.824726 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.824740 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.824767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.824781 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:13Z","lastTransitionTime":"2026-02-02T07:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.927094 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.927137 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.927148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.927198 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:13 crc kubenswrapper[4730]: I0202 07:28:13.927211 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:13Z","lastTransitionTime":"2026-02-02T07:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.030601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.030642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.030655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.030671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.030687 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:14Z","lastTransitionTime":"2026-02-02T07:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.132942 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.132979 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.132990 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.133005 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.133017 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:14Z","lastTransitionTime":"2026-02-02T07:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.235905 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.235967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.235993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.236022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.236041 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:14Z","lastTransitionTime":"2026-02-02T07:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.252431 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:14 crc kubenswrapper[4730]: E0202 07:28:14.252610 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.270400 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:47:25.970283019 +0000 UTC Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.338265 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.338293 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.338302 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.338316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.338326 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:14Z","lastTransitionTime":"2026-02-02T07:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.441087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.441143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.441192 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.441236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.441254 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:14Z","lastTransitionTime":"2026-02-02T07:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.502048 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.502097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.502113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.502138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.502155 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:14Z","lastTransitionTime":"2026-02-02T07:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:14 crc kubenswrapper[4730]: E0202 07:28:14.522685 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:14Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.527186 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.527222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.527230 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.527246 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.527256 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:14Z","lastTransitionTime":"2026-02-02T07:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:14 crc kubenswrapper[4730]: E0202 07:28:14.542684 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:14Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.546243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.546290 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.546300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.546312 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.546321 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:14Z","lastTransitionTime":"2026-02-02T07:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:14 crc kubenswrapper[4730]: E0202 07:28:14.563498 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:14Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.567479 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.567532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.567553 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.567577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.567595 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:14Z","lastTransitionTime":"2026-02-02T07:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:14 crc kubenswrapper[4730]: E0202 07:28:14.579936 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:14Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.584243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.584299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.584316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.584339 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.584355 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:14Z","lastTransitionTime":"2026-02-02T07:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:14 crc kubenswrapper[4730]: E0202 07:28:14.600966 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:14Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:14 crc kubenswrapper[4730]: E0202 07:28:14.601074 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.602509 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.602536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.602545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.602558 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.602568 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:14Z","lastTransitionTime":"2026-02-02T07:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.704841 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.704883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.704893 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.704909 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.704921 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:14Z","lastTransitionTime":"2026-02-02T07:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.807255 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.807306 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.807343 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.807362 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.807376 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:14Z","lastTransitionTime":"2026-02-02T07:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.909921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.909960 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.909970 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.909984 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:14 crc kubenswrapper[4730]: I0202 07:28:14.909996 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:14Z","lastTransitionTime":"2026-02-02T07:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.012019 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.012051 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.012059 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.012071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.012080 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:15Z","lastTransitionTime":"2026-02-02T07:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.115003 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.115052 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.115068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.115088 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.115102 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:15Z","lastTransitionTime":"2026-02-02T07:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.217823 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.217870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.217885 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.217905 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.217919 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:15Z","lastTransitionTime":"2026-02-02T07:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.252425 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.252497 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.252540 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:15 crc kubenswrapper[4730]: E0202 07:28:15.252630 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:15 crc kubenswrapper[4730]: E0202 07:28:15.252688 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:15 crc kubenswrapper[4730]: E0202 07:28:15.252747 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.271436 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:31:59.099953257 +0000 UTC Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.319971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.320007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.320019 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.320034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.320047 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:15Z","lastTransitionTime":"2026-02-02T07:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.422024 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.422056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.422064 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.422078 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.422088 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:15Z","lastTransitionTime":"2026-02-02T07:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.524395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.524478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.524502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.524531 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.524553 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:15Z","lastTransitionTime":"2026-02-02T07:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.625594 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zp8tp_00b75ed7-302d-4f21-9c20-6ecab241b7b4/kube-multus/0.log" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.625655 4730 generic.go:334] "Generic (PLEG): container finished" podID="00b75ed7-302d-4f21-9c20-6ecab241b7b4" containerID="2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996" exitCode=1 Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.625686 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zp8tp" event={"ID":"00b75ed7-302d-4f21-9c20-6ecab241b7b4","Type":"ContainerDied","Data":"2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996"} Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.626183 4730 scope.go:117] "RemoveContainer" containerID="2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.626439 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.626502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.626525 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.626556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.626581 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:15Z","lastTransitionTime":"2026-02-02T07:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.644255 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b
8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.662926 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.675392 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.688837 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab
3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.701427 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:15Z\\\",\\\"message\\\":\\\"2026-02-02T07:27:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed\\\\n2026-02-02T07:27:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed to /host/opt/cni/bin/\\\\n2026-02-02T07:27:30Z [verbose] multus-daemon started\\\\n2026-02-02T07:27:30Z [verbose] Readiness Indicator file check\\\\n2026-02-02T07:28:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.713730 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.727394 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.728724 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.728763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.728772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.728798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.728808 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:15Z","lastTransitionTime":"2026-02-02T07:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.742222 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.754460 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7f82ce-4e36-4150-a28d-365fcac970c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.767514 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.783830 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.793237 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.810143 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:00Z\\\",\\\"message\\\":\\\"160\\\\nI0202 07:28:00.183596 6432 reflector.go:311] Stopping reflector *v1.EndpointSlice 
(0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183708 6432 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183987 6432 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184128 6432 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184148 6432 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 07:28:00.184200 6432 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 07:28:00.186250 6432 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:28:00.186287 6432 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:28:00.186323 6432 factory.go:656] Stopping watch factory\\\\nI0202 07:28:00.186347 6432 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:28:00.186323 6432 handler.go:208] Removed *v1.Node event handler 2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6
ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.821033 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.830417 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.831481 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.831533 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 
07:28:15.831546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.831562 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.831574 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:15Z","lastTransitionTime":"2026-02-02T07:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.843401 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.858411 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:15Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.934721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.934778 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.934786 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.934801 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:15 crc kubenswrapper[4730]: I0202 07:28:15.934811 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:15Z","lastTransitionTime":"2026-02-02T07:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.036700 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.036756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.036766 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.036780 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.036790 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:16Z","lastTransitionTime":"2026-02-02T07:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.139115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.139156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.139179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.139192 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.139201 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:16Z","lastTransitionTime":"2026-02-02T07:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.241991 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.242041 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.242050 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.242062 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.242071 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:16Z","lastTransitionTime":"2026-02-02T07:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.252318 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:16 crc kubenswrapper[4730]: E0202 07:28:16.252460 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.272025 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:19:51.329132733 +0000 UTC Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.344328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.344368 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.344380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.344395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.344406 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:16Z","lastTransitionTime":"2026-02-02T07:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.446461 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.446494 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.446503 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.446518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.446528 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:16Z","lastTransitionTime":"2026-02-02T07:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.548895 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.548922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.548931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.548960 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.548970 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:16Z","lastTransitionTime":"2026-02-02T07:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.632320 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zp8tp_00b75ed7-302d-4f21-9c20-6ecab241b7b4/kube-multus/0.log" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.632382 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zp8tp" event={"ID":"00b75ed7-302d-4f21-9c20-6ecab241b7b4","Type":"ContainerStarted","Data":"2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57"} Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.645794 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.651295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.651321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.651331 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.651345 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.651354 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:16Z","lastTransitionTime":"2026-02-02T07:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.659863 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:15Z\\\",\\\"message\\\":\\\"2026-02-02T07:27:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed\\\\n2026-02-02T07:27:30+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed to /host/opt/cni/bin/\\\\n2026-02-02T07:27:30Z [verbose] multus-daemon started\\\\n2026-02-02T07:27:30Z [verbose] Readiness Indicator file check\\\\n2026-02-02T07:28:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.670715 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef
4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.685825 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda
1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.700779 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.711937 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.729245 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.746385 4730 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.753325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.753374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.753386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.753404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.753420 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:16Z","lastTransitionTime":"2026-02-02T07:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.776063 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:00Z\\\",\\\"message\\\":\\\"160\\\\nI0202 07:28:00.183596 6432 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183708 6432 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183987 6432 reflector.go:311] Stopping reflector 
*v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184128 6432 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184148 6432 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 07:28:00.184200 6432 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 07:28:00.186250 6432 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:28:00.186287 6432 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:28:00.186323 6432 factory.go:656] Stopping watch factory\\\\nI0202 07:28:00.186347 6432 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:28:00.186323 6432 handler.go:208] Removed *v1.Node event handler 2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6
ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.793199 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.809876 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7f82ce-4e36-4150-a28d-365fcac970c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3371e38
f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.827427 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.845415 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.855724 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.855765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.855799 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.855819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.855830 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:16Z","lastTransitionTime":"2026-02-02T07:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.859439 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.872444 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.885760 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.900748 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:16Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.958879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.959010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.959023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.959075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:16 crc kubenswrapper[4730]: I0202 07:28:16.959089 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:16Z","lastTransitionTime":"2026-02-02T07:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.061732 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.061770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.061779 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.061793 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.061802 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:17Z","lastTransitionTime":"2026-02-02T07:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.164308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.164348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.164360 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.164377 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.164388 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:17Z","lastTransitionTime":"2026-02-02T07:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.253147 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.253201 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.253201 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:17 crc kubenswrapper[4730]: E0202 07:28:17.253285 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:17 crc kubenswrapper[4730]: E0202 07:28:17.253384 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:17 crc kubenswrapper[4730]: E0202 07:28:17.253578 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.265409 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7f82ce-4e36-4150-a28d-365fcac970c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.266689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.266727 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.266737 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.266757 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.266768 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:17Z","lastTransitionTime":"2026-02-02T07:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.273041 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:26:06.778081921 +0000 UTC Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.278523 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.290483 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.300127 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.320115 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:00Z\\\",\\\"message\\\":\\\"160\\\\nI0202 07:28:00.183596 6432 reflector.go:311] Stopping reflector *v1.EndpointSlice 
(0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183708 6432 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183987 6432 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184128 6432 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184148 6432 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 07:28:00.184200 6432 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 07:28:00.186250 6432 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:28:00.186287 6432 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:28:00.186323 6432 factory.go:656] Stopping watch factory\\\\nI0202 07:28:00.186347 6432 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:28:00.186323 6432 handler.go:208] Removed *v1.Node event handler 2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6
ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.330628 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.345027 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.362743 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.368841 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.368881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.368892 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.368909 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.368922 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:17Z","lastTransitionTime":"2026-02-02T07:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.377799 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.389156 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda
1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.400731 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.409608 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.422484 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab
3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.434670 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:15Z\\\",\\\"message\\\":\\\"2026-02-02T07:27:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed\\\\n2026-02-02T07:27:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed to /host/opt/cni/bin/\\\\n2026-02-02T07:27:30Z [verbose] multus-daemon started\\\\n2026-02-02T07:27:30Z [verbose] Readiness Indicator file check\\\\n2026-02-02T07:28:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\
"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.444851 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef
4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.456181 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.469684 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:17Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.470928 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.470958 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.470967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:17 crc 
kubenswrapper[4730]: I0202 07:28:17.470981 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.470992 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:17Z","lastTransitionTime":"2026-02-02T07:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.573353 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.573404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.573416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.573430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.573439 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:17Z","lastTransitionTime":"2026-02-02T07:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.675615 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.675651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.675661 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.675678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.675686 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:17Z","lastTransitionTime":"2026-02-02T07:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.778347 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.778387 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.778398 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.778414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.778424 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:17Z","lastTransitionTime":"2026-02-02T07:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.879820 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.879848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.879855 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.879867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.879876 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:17Z","lastTransitionTime":"2026-02-02T07:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.981228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.981253 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.981261 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.981273 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:17 crc kubenswrapper[4730]: I0202 07:28:17.981281 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:17Z","lastTransitionTime":"2026-02-02T07:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.083320 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.083352 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.083363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.083378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.083389 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:18Z","lastTransitionTime":"2026-02-02T07:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.184847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.184879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.184888 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.184901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.184910 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:18Z","lastTransitionTime":"2026-02-02T07:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.252506 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:18 crc kubenswrapper[4730]: E0202 07:28:18.252604 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.273677 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 22:12:17.002692641 +0000 UTC Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.287377 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.287398 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.287409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.287422 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.287432 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:18Z","lastTransitionTime":"2026-02-02T07:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.389911 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.389952 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.389960 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.389975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.389984 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:18Z","lastTransitionTime":"2026-02-02T07:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.491808 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.491848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.491862 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.491881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.491893 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:18Z","lastTransitionTime":"2026-02-02T07:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.594017 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.594045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.594054 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.594069 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.594080 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:18Z","lastTransitionTime":"2026-02-02T07:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.699087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.699137 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.699153 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.699206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.699224 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:18Z","lastTransitionTime":"2026-02-02T07:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.801420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.801470 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.801480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.801494 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.801505 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:18Z","lastTransitionTime":"2026-02-02T07:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.903949 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.904022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.904042 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.904065 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:18 crc kubenswrapper[4730]: I0202 07:28:18.904082 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:18Z","lastTransitionTime":"2026-02-02T07:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.006648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.006693 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.006705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.006719 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.006731 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:19Z","lastTransitionTime":"2026-02-02T07:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.109579 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.109612 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.109622 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.109638 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.109648 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:19Z","lastTransitionTime":"2026-02-02T07:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.211908 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.211965 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.211981 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.212006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.212027 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:19Z","lastTransitionTime":"2026-02-02T07:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.252429 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.252438 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.252491 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:19 crc kubenswrapper[4730]: E0202 07:28:19.252635 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:19 crc kubenswrapper[4730]: E0202 07:28:19.252771 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:19 crc kubenswrapper[4730]: E0202 07:28:19.252960 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.274219 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:59:03.124065704 +0000 UTC Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.314532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.314575 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.314589 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.314607 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.314621 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:19Z","lastTransitionTime":"2026-02-02T07:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.417142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.417203 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.417213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.417228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.417238 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:19Z","lastTransitionTime":"2026-02-02T07:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.520258 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.520303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.520316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.520330 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.520341 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:19Z","lastTransitionTime":"2026-02-02T07:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.622546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.622575 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.622583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.622597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.622607 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:19Z","lastTransitionTime":"2026-02-02T07:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.724843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.724902 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.724920 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.724945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.724963 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:19Z","lastTransitionTime":"2026-02-02T07:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.826934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.826974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.826985 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.827002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.827013 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:19Z","lastTransitionTime":"2026-02-02T07:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.929291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.929337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.929347 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.929363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:19 crc kubenswrapper[4730]: I0202 07:28:19.929374 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:19Z","lastTransitionTime":"2026-02-02T07:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.031523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.031564 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.031575 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.031587 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.031597 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:20Z","lastTransitionTime":"2026-02-02T07:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.133652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.133706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.133721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.133740 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.133752 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:20Z","lastTransitionTime":"2026-02-02T07:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.236321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.236404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.236422 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.236447 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.236464 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:20Z","lastTransitionTime":"2026-02-02T07:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.252840 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:20 crc kubenswrapper[4730]: E0202 07:28:20.253026 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.274862 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:06:12.451882608 +0000 UTC Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.339414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.339460 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.339472 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.339493 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.339506 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:20Z","lastTransitionTime":"2026-02-02T07:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.442068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.442122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.442138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.442183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.442204 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:20Z","lastTransitionTime":"2026-02-02T07:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.544322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.544376 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.544390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.544408 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.544419 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:20Z","lastTransitionTime":"2026-02-02T07:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.647040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.647075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.647083 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.647098 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.647107 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:20Z","lastTransitionTime":"2026-02-02T07:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.749736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.749808 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.749832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.749864 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.749885 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:20Z","lastTransitionTime":"2026-02-02T07:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.852331 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.852381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.852396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.852413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.852423 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:20Z","lastTransitionTime":"2026-02-02T07:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.955110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.955206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.955224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.955248 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:20 crc kubenswrapper[4730]: I0202 07:28:20.955266 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:20Z","lastTransitionTime":"2026-02-02T07:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.058269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.058305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.058315 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.058330 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.058342 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:21Z","lastTransitionTime":"2026-02-02T07:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.161326 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.161391 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.161414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.161441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.161459 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:21Z","lastTransitionTime":"2026-02-02T07:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.252533 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.252685 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:21 crc kubenswrapper[4730]: E0202 07:28:21.252743 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.252764 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:21 crc kubenswrapper[4730]: E0202 07:28:21.253833 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:21 crc kubenswrapper[4730]: E0202 07:28:21.257405 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.265508 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.265554 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.265566 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.265582 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.265593 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:21Z","lastTransitionTime":"2026-02-02T07:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.275950 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:57:49.588919113 +0000 UTC Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.368526 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.368577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.368605 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.368626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.368640 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:21Z","lastTransitionTime":"2026-02-02T07:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.471586 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.471641 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.471660 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.471688 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.471741 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:21Z","lastTransitionTime":"2026-02-02T07:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.574645 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.574701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.574717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.574741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.574768 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:21Z","lastTransitionTime":"2026-02-02T07:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.677301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.677347 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.677359 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.677431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.677464 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:21Z","lastTransitionTime":"2026-02-02T07:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.780069 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.780110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.780122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.780139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.780152 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:21Z","lastTransitionTime":"2026-02-02T07:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.883344 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.883380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.883391 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.883410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.883423 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:21Z","lastTransitionTime":"2026-02-02T07:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.987213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.987285 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.987303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.987330 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:21 crc kubenswrapper[4730]: I0202 07:28:21.987352 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:21Z","lastTransitionTime":"2026-02-02T07:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.089247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.089285 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.089294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.089309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.089320 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:22Z","lastTransitionTime":"2026-02-02T07:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.192620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.192671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.192692 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.192720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.192742 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:22Z","lastTransitionTime":"2026-02-02T07:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.252451 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:22 crc kubenswrapper[4730]: E0202 07:28:22.252638 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.276710 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 22:26:16.279981526 +0000 UTC Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.296224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.296285 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.296302 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.296330 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.296348 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:22Z","lastTransitionTime":"2026-02-02T07:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.399608 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.399666 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.399685 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.399708 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.399726 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:22Z","lastTransitionTime":"2026-02-02T07:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.502915 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.502963 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.502976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.502992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.503003 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:22Z","lastTransitionTime":"2026-02-02T07:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.605861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.605897 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.605909 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.605926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.605938 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:22Z","lastTransitionTime":"2026-02-02T07:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.708124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.708191 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.708202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.708218 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.708230 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:22Z","lastTransitionTime":"2026-02-02T07:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.810863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.810921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.810939 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.810962 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.810981 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:22Z","lastTransitionTime":"2026-02-02T07:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.915041 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.915610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.915964 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.916277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:22 crc kubenswrapper[4730]: I0202 07:28:22.916469 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:22Z","lastTransitionTime":"2026-02-02T07:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.020208 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.020267 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.020284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.020308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.020328 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:23Z","lastTransitionTime":"2026-02-02T07:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.123507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.123538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.123546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.123563 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.123572 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:23Z","lastTransitionTime":"2026-02-02T07:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.225685 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.225744 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.225761 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.225784 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.225801 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:23Z","lastTransitionTime":"2026-02-02T07:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.252584 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.252631 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:23 crc kubenswrapper[4730]: E0202 07:28:23.252704 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.252722 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:23 crc kubenswrapper[4730]: E0202 07:28:23.252851 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:23 crc kubenswrapper[4730]: E0202 07:28:23.252928 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.277466 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:53:01.189958844 +0000 UTC Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.328929 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.328983 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.328993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.329009 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.329019 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:23Z","lastTransitionTime":"2026-02-02T07:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.431877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.431915 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.431923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.431937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.431948 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:23Z","lastTransitionTime":"2026-02-02T07:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.534375 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.534419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.534436 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.534457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.534475 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:23Z","lastTransitionTime":"2026-02-02T07:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.637856 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.637921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.637942 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.637971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.637993 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:23Z","lastTransitionTime":"2026-02-02T07:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.741116 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.741170 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.741193 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.741215 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.741226 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:23Z","lastTransitionTime":"2026-02-02T07:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.844039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.844096 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.844115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.844140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.844158 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:23Z","lastTransitionTime":"2026-02-02T07:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.946462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.946493 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.946502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.946517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:23 crc kubenswrapper[4730]: I0202 07:28:23.946576 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:23Z","lastTransitionTime":"2026-02-02T07:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.049025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.049048 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.049055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.049067 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.049076 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.152451 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.152511 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.152529 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.152553 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.152569 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.252909 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:24 crc kubenswrapper[4730]: E0202 07:28:24.253129 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.254877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.254929 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.254951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.254980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.255002 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.278494 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:38:42.321336941 +0000 UTC Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.357064 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.357108 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.357126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.357152 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.357223 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.460764 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.460814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.460831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.460854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.460872 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.563580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.563644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.563661 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.563686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.563704 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.665488 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.665557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.665574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.665599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.665617 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.668766 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.668824 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.668841 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.668863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.668880 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: E0202 07:28:24.688258 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:24Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.692946 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.693008 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.693030 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.693059 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.693085 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: E0202 07:28:24.711467 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:24Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.715888 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.715944 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.715961 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.715986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.716004 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: E0202 07:28:24.734506 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:24Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.738261 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.738292 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.738301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.738316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.738342 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: E0202 07:28:24.755654 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:24Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.760334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.760386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.760405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.760427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.760444 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: E0202 07:28:24.778982 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:24Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:24 crc kubenswrapper[4730]: E0202 07:28:24.779334 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.780968 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.781013 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.781031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.781058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.781076 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.883849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.883930 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.883947 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.883973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.883990 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.986781 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.986834 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.986850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.986878 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:24 crc kubenswrapper[4730]: I0202 07:28:24.986895 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:24Z","lastTransitionTime":"2026-02-02T07:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.089980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.090036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.090054 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.090077 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.090096 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:25Z","lastTransitionTime":"2026-02-02T07:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.193694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.193749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.193766 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.193822 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.193841 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:25Z","lastTransitionTime":"2026-02-02T07:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.252774 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.252825 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:25 crc kubenswrapper[4730]: E0202 07:28:25.252997 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:25 crc kubenswrapper[4730]: E0202 07:28:25.253098 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.253102 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:25 crc kubenswrapper[4730]: E0202 07:28:25.253880 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.278974 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 08:51:28.81910515 +0000 UTC Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.296269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.296361 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.296380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.296404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.296421 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:25Z","lastTransitionTime":"2026-02-02T07:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.399685 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.399740 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.399756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.399778 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.399794 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:25Z","lastTransitionTime":"2026-02-02T07:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.502488 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.502551 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.502573 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.502597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.502614 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:25Z","lastTransitionTime":"2026-02-02T07:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.605290 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.605352 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.605366 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.605381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.605391 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:25Z","lastTransitionTime":"2026-02-02T07:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.707976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.708044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.708055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.708070 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.708083 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:25Z","lastTransitionTime":"2026-02-02T07:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.810705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.810760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.810777 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.810802 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.810827 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:25Z","lastTransitionTime":"2026-02-02T07:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.913504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.913597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.913616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.913640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:25 crc kubenswrapper[4730]: I0202 07:28:25.913657 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:25Z","lastTransitionTime":"2026-02-02T07:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.016316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.016356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.016367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.016383 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.016397 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:26Z","lastTransitionTime":"2026-02-02T07:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.119408 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.119460 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.119477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.119500 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.119517 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:26Z","lastTransitionTime":"2026-02-02T07:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.222315 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.222376 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.222392 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.222417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.222435 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:26Z","lastTransitionTime":"2026-02-02T07:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.252153 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:26 crc kubenswrapper[4730]: E0202 07:28:26.252328 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.279213 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:51:23.568083639 +0000 UTC Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.324550 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.324717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.324737 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.324790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.324808 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:26Z","lastTransitionTime":"2026-02-02T07:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.427849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.427927 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.427938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.427958 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.427968 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:26Z","lastTransitionTime":"2026-02-02T07:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.531494 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.531554 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.531564 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.531579 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.531590 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:26Z","lastTransitionTime":"2026-02-02T07:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.634555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.634602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.634612 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.634627 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.634641 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:26Z","lastTransitionTime":"2026-02-02T07:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.736908 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.736957 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.736974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.737029 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.737046 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:26Z","lastTransitionTime":"2026-02-02T07:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.839277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.839348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.839372 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.839400 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.839419 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:26Z","lastTransitionTime":"2026-02-02T07:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.941937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.941978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.941987 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.942000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:26 crc kubenswrapper[4730]: I0202 07:28:26.942012 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:26Z","lastTransitionTime":"2026-02-02T07:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.044042 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.044080 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.044089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.044102 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.044111 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:27Z","lastTransitionTime":"2026-02-02T07:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.146559 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.146609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.146618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.146633 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.146642 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:27Z","lastTransitionTime":"2026-02-02T07:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.248613 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.248642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.248651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.248666 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.248678 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:27Z","lastTransitionTime":"2026-02-02T07:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.252191 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.252195 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.253303 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:27 crc kubenswrapper[4730]: E0202 07:28:27.253412 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:27 crc kubenswrapper[4730]: E0202 07:28:27.253605 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:27 crc kubenswrapper[4730]: E0202 07:28:27.254310 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.254756 4730 scope.go:117] "RemoveContainer" containerID="90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.266176 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc 
kubenswrapper[4730]: I0202 07:28:27.279953 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:34:12.777594519 +0000 UTC Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.284841 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7f82ce-4e36-4150-a28d-365fcac970c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.297982 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.310768 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.321093 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.347211 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:00Z\\\",\\\"message\\\":\\\"160\\\\nI0202 07:28:00.183596 6432 reflector.go:311] Stopping reflector *v1.EndpointSlice 
(0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183708 6432 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183987 6432 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184128 6432 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184148 6432 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 07:28:00.184200 6432 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 07:28:00.186250 6432 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:28:00.186287 6432 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:28:00.186323 6432 factory.go:656] Stopping watch factory\\\\nI0202 07:28:00.186347 6432 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:28:00.186323 6432 handler.go:208] Removed *v1.Node event handler 2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6
ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.350610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.350668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.350678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.350693 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.350704 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:27Z","lastTransitionTime":"2026-02-02T07:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.360321 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.376937 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.392497 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.405088 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-02-02T07:28:15Z\\\",\\\"message\\\":\\\"2026-02-02T07:27:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed\\\\n2026-02-02T07:27:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed to /host/opt/cni/bin/\\\\n2026-02-02T07:27:30Z [verbose] multus-daemon started\\\\n2026-02-02T07:27:30Z [verbose] Readiness Indicator file check\\\\n2026-02-02T07:28:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.416556 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef
4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.434126 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda
1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.446420 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.452960 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.453035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.453054 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.453076 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.453093 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:27Z","lastTransitionTime":"2026-02-02T07:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.459045 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.475249 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.490585 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.502080 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.555819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.555862 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.555870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.555885 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.555894 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:27Z","lastTransitionTime":"2026-02-02T07:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.660330 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.660382 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.660392 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.660407 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.660419 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:27Z","lastTransitionTime":"2026-02-02T07:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.669985 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/2.log" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.677496 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerStarted","Data":"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d"} Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.678312 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.715467 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:00Z\\\",\\\"message\\\":\\\"160\\\\nI0202 07:28:00.183596 6432 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183708 6432 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183987 6432 reflector.go:311] 
Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184128 6432 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184148 6432 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 07:28:00.184200 6432 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 07:28:00.186250 6432 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:28:00.186287 6432 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:28:00.186323 6432 factory.go:656] Stopping watch factory\\\\nI0202 07:28:00.186347 6432 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:28:00.186323 6432 handler.go:208] Removed *v1.Node event handler 
2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:28:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.734741 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc 
kubenswrapper[4730]: I0202 07:28:27.785931 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7f82ce-4e36-4150-a28d-365fcac970c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.787496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.787526 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.787535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.787549 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.787558 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:27Z","lastTransitionTime":"2026-02-02T07:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.813179 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.828499 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.836804 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.863316 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.873241 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.885622 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.889635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.889670 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.889679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.889693 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.889702 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:27Z","lastTransitionTime":"2026-02-02T07:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.898072 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.910189 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:15Z\\\",\\\"message\\\":\\\"2026-02-02T07:27:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed\\\\n2026-02-02T07:27:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed to /host/opt/cni/bin/\\\\n2026-02-02T07:27:30Z [verbose] multus-daemon started\\\\n2026-02-02T07:27:30Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T07:28:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.922933 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef
4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.939321 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda
1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.951205 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.961391 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.974156 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.986877 4730 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:27Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.991281 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.991300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.991308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.991322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:27 crc kubenswrapper[4730]: I0202 07:28:27.991331 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:27Z","lastTransitionTime":"2026-02-02T07:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.093839 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.093884 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.093902 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.093923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.093939 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:28Z","lastTransitionTime":"2026-02-02T07:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.197536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.197594 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.197616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.197645 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.197666 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:28Z","lastTransitionTime":"2026-02-02T07:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.252776 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:28 crc kubenswrapper[4730]: E0202 07:28:28.252905 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.280904 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 12:44:03.262961398 +0000 UTC Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.299583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.299604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.299611 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.299624 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.299633 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:28Z","lastTransitionTime":"2026-02-02T07:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.401946 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.402021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.402046 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.402078 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.402097 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:28Z","lastTransitionTime":"2026-02-02T07:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.504994 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.505039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.505050 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.505067 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.505079 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:28Z","lastTransitionTime":"2026-02-02T07:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.607113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.607197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.607209 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.607224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.607235 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:28Z","lastTransitionTime":"2026-02-02T07:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.682875 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/3.log" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.683716 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/2.log" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.687125 4730 generic.go:334] "Generic (PLEG): container finished" podID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerID="40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d" exitCode=1 Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.687209 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerDied","Data":"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d"} Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.687263 4730 scope.go:117] "RemoveContainer" containerID="90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.688325 4730 scope.go:117] "RemoveContainer" containerID="40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d" Feb 02 07:28:28 crc kubenswrapper[4730]: E0202 07:28:28.688680 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.708075 4730 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f1a18a87d9ff40eab897c9778181cc5f04e179c61a98a1adffe188e04c2f31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:00Z\\\",\\\"message\\\":\\\"160\\\\nI0202 07:28:00.183596 6432 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183708 6432 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.183987 6432 reflector.go:311] Stopping reflector 
*v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184128 6432 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 07:28:00.184148 6432 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 07:28:00.184200 6432 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 07:28:00.186250 6432 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 07:28:00.186287 6432 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 07:28:00.186323 6432 factory.go:656] Stopping watch factory\\\\nI0202 07:28:00.186347 6432 ovnkube.go:599] Stopped ovnkube\\\\nI0202 07:28:00.186323 6432 handler.go:208] Removed *v1.Node event handler 2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:28Z\\\",\\\"message\\\":\\\"ft-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 07:28:28.200411 6785 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z]\\\\nI0202 07:28:28.200429 6785 event.go:377] Event(v1.ObjectRefere\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:28:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\
"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.709783 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.709814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.709824 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.709842 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.709853 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:28Z","lastTransitionTime":"2026-02-02T07:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.723952 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc 
kubenswrapper[4730]: I0202 07:28:28.738022 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7f82ce-4e36-4150-a28d-365fcac970c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.757398 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.772986 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.787495 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.805950 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.811959 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.812005 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.812022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.812043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.812058 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:28Z","lastTransitionTime":"2026-02-02T07:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.821510 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.836994 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.849445 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: 
I0202 07:28:28.861854 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:15Z\\\",\\\"message\\\":\\\"2026-02-02T07:27:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed\\\\n2026-02-02T07:27:30+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed to /host/opt/cni/bin/\\\\n2026-02-02T07:27:30Z [verbose] multus-daemon started\\\\n2026-02-02T07:27:30Z [verbose] Readiness Indicator file check\\\\n2026-02-02T07:28:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.874268 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef
4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.888964 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda
1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.905009 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.914574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.914606 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.914617 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.914632 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.914642 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:28Z","lastTransitionTime":"2026-02-02T07:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.917627 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.931738 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a793
79b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:28 crc kubenswrapper[4730]: I0202 07:28:28.947040 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.018375 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.018451 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.018477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.018509 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.018545 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:29Z","lastTransitionTime":"2026-02-02T07:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.121609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.121667 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.121679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.121693 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.121702 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:29Z","lastTransitionTime":"2026-02-02T07:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.225365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.225424 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.225440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.225465 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.225482 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:29Z","lastTransitionTime":"2026-02-02T07:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.253079 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.253192 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.253233 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:29 crc kubenswrapper[4730]: E0202 07:28:29.253295 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:29 crc kubenswrapper[4730]: E0202 07:28:29.253427 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:29 crc kubenswrapper[4730]: E0202 07:28:29.253524 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.281761 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:39:13.920836708 +0000 UTC Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.327827 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.327882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.327898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.327921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.327939 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:29Z","lastTransitionTime":"2026-02-02T07:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.431108 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.431196 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.431221 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.431249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.431269 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:29Z","lastTransitionTime":"2026-02-02T07:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.534126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.534218 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.534237 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.534260 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.534277 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:29Z","lastTransitionTime":"2026-02-02T07:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.636874 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.636920 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.636934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.636952 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.636965 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:29Z","lastTransitionTime":"2026-02-02T07:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.691894 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/3.log" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.695711 4730 scope.go:117] "RemoveContainer" containerID="40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d" Feb 02 07:28:29 crc kubenswrapper[4730]: E0202 07:28:29.695863 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.714723 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.734255 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.739579 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 
07:28:29.739703 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.739786 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.739895 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.739991 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:29Z","lastTransitionTime":"2026-02-02T07:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.757955 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334e
aa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.775891 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.799620 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee122
0d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.819421 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.831637 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.843334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.843682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.843860 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.844012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.844133 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:29Z","lastTransitionTime":"2026-02-02T07:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.845141 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.859583 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:15Z\\\",\\\"message\\\":\\\"2026-02-02T07:27:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed\\\\n2026-02-02T07:27:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed to /host/opt/cni/bin/\\\\n2026-02-02T07:27:30Z [verbose] multus-daemon started\\\\n2026-02-02T07:27:30Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T07:28:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.876402 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.890751 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.902436 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7f82ce-4e36-4150-a28d-365fcac970c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.914697 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.928251 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.939122 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.947215 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.947273 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.947289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.947308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.947322 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:29Z","lastTransitionTime":"2026-02-02T07:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.964472 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:28Z\\\",\\\"message\\\":\\\"ft-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 07:28:28.200411 6785 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z]\\\\nI0202 07:28:28.200429 6785 event.go:377] Event(v1.ObjectRefere\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:28:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6
ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:29 crc kubenswrapper[4730]: I0202 07:28:29.978874 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:29Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.050709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.050750 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.050762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.050780 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.050792 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:30Z","lastTransitionTime":"2026-02-02T07:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.154002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.154076 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.154100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.154130 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.154151 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:30Z","lastTransitionTime":"2026-02-02T07:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.252769 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:30 crc kubenswrapper[4730]: E0202 07:28:30.252929 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.256883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.256910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.256938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.256951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.256961 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:30Z","lastTransitionTime":"2026-02-02T07:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.282245 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 11:14:38.843542776 +0000 UTC Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.359806 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.359847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.359855 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.359870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.359878 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:30Z","lastTransitionTime":"2026-02-02T07:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.462084 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.462118 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.462126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.462140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.462149 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:30Z","lastTransitionTime":"2026-02-02T07:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.564126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.564187 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.564202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.564222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.564236 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:30Z","lastTransitionTime":"2026-02-02T07:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.666971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.667064 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.667082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.667103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.667120 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:30Z","lastTransitionTime":"2026-02-02T07:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.770063 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.770136 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.770211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.770250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.770272 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:30Z","lastTransitionTime":"2026-02-02T07:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.872491 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.872534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.872545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.872563 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.872575 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:30Z","lastTransitionTime":"2026-02-02T07:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.975083 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.975120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.975132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.975148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:30 crc kubenswrapper[4730]: I0202 07:28:30.975177 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:30Z","lastTransitionTime":"2026-02-02T07:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.077549 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.077601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.077617 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.077639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.077657 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:31Z","lastTransitionTime":"2026-02-02T07:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.141897 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.142058 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.142076 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.142050744 +0000 UTC m=+148.563254102 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.142108 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.142134 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.142142 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.142192 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.142181257 +0000 UTC m=+148.563384605 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.142209 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.142276 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.142293 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.142305 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.142307 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.142320 4730 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.142330 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.142332 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.142345 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.142335161 +0000 UTC m=+148.563538519 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.142447 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.142421673 +0000 UTC m=+148.563625111 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.142463 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.142456334 +0000 UTC m=+148.563659802 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.180582 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.180618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.180628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.180644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.180654 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:31Z","lastTransitionTime":"2026-02-02T07:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.252893 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.252961 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.253016 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.253126 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.253216 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 07:28:31 crc kubenswrapper[4730]: E0202 07:28:31.253272 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.282524 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:27:40.107787671 +0000 UTC
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.283022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.283068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.283085 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.283105 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.283123 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:31Z","lastTransitionTime":"2026-02-02T07:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.385831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.385895 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.385928 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.385958 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.385979 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:31Z","lastTransitionTime":"2026-02-02T07:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.489067 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.489129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.489152 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.489210 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.489232 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:31Z","lastTransitionTime":"2026-02-02T07:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.592722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.592791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.592814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.592842 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.592865 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:31Z","lastTransitionTime":"2026-02-02T07:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.697089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.697138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.697147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.697176 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.697187 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:31Z","lastTransitionTime":"2026-02-02T07:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.800814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.800883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.800907 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.800935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.800957 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:31Z","lastTransitionTime":"2026-02-02T07:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.904128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.904214 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.904236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.904259 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:31 crc kubenswrapper[4730]: I0202 07:28:31.904276 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:31Z","lastTransitionTime":"2026-02-02T07:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.007534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.008044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.008058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.008074 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.008088 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:32Z","lastTransitionTime":"2026-02-02T07:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.111392 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.111457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.111475 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.111499 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.111518 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:32Z","lastTransitionTime":"2026-02-02T07:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.213999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.214046 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.214055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.214069 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.214079 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:32Z","lastTransitionTime":"2026-02-02T07:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.252105 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth"
Feb 02 07:28:32 crc kubenswrapper[4730]: E0202 07:28:32.252262 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.283346 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:14:49.444791095 +0000 UTC
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.316311 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.316377 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.316400 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.316430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.316455 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:32Z","lastTransitionTime":"2026-02-02T07:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.419452 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.419524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.419546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.419575 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.419598 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:32Z","lastTransitionTime":"2026-02-02T07:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.522644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.522710 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.522733 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.522760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.522782 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:32Z","lastTransitionTime":"2026-02-02T07:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.625308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.625379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.625404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.625437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.625460 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:32Z","lastTransitionTime":"2026-02-02T07:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.729066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.729537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.729557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.729578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.729598 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:32Z","lastTransitionTime":"2026-02-02T07:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.832836 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.832894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.832910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.832932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.832951 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:32Z","lastTransitionTime":"2026-02-02T07:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.936132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.936241 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.936264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.936293 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:32 crc kubenswrapper[4730]: I0202 07:28:32.936314 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:32Z","lastTransitionTime":"2026-02-02T07:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.039584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.039642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.039664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.039694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.039713 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:33Z","lastTransitionTime":"2026-02-02T07:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.143716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.143762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.143771 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.143790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.143800 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:33Z","lastTransitionTime":"2026-02-02T07:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.247066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.247144 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.247192 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.247224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.247246 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:33Z","lastTransitionTime":"2026-02-02T07:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.252529 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.252569 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.252580 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 07:28:33 crc kubenswrapper[4730]: E0202 07:28:33.252717 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 07:28:33 crc kubenswrapper[4730]: E0202 07:28:33.252811 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 07:28:33 crc kubenswrapper[4730]: E0202 07:28:33.252987 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.284208 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:24:39.874952233 +0000 UTC
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.349845 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.349913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.349931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.349957 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.349978 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:33Z","lastTransitionTime":"2026-02-02T07:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.453370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.453453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.453476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.453505 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.453526 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:33Z","lastTransitionTime":"2026-02-02T07:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.556756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.556809 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.556831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.556860 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.556883 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:33Z","lastTransitionTime":"2026-02-02T07:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.659772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.659827 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.659837 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.659851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.659862 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:33Z","lastTransitionTime":"2026-02-02T07:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.763662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.763730 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.763753 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.763784 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.763811 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:33Z","lastTransitionTime":"2026-02-02T07:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.866513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.866579 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.866601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.866629 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.866651 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:33Z","lastTransitionTime":"2026-02-02T07:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.969666 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.969743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.969771 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.969815 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:33 crc kubenswrapper[4730]: I0202 07:28:33.969843 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:33Z","lastTransitionTime":"2026-02-02T07:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.073439 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.073480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.073489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.073507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.073518 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:34Z","lastTransitionTime":"2026-02-02T07:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.175652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.175706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.175717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.175738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.175751 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:34Z","lastTransitionTime":"2026-02-02T07:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.252935 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:34 crc kubenswrapper[4730]: E0202 07:28:34.253128 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.278097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.278143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.278153 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.278185 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.278196 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:34Z","lastTransitionTime":"2026-02-02T07:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.284845 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 10:05:39.716482174 +0000 UTC Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.380658 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.380699 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.380713 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.380729 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.380740 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:34Z","lastTransitionTime":"2026-02-02T07:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.483919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.483982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.483997 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.484018 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.484033 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:34Z","lastTransitionTime":"2026-02-02T07:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.586989 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.587059 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.587079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.587108 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.587129 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:34Z","lastTransitionTime":"2026-02-02T07:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.690767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.690810 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.690824 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.690839 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.690849 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:34Z","lastTransitionTime":"2026-02-02T07:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.793673 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.793713 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.793722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.793737 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.793746 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:34Z","lastTransitionTime":"2026-02-02T07:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.896995 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.897082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.897101 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.897126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:34 crc kubenswrapper[4730]: I0202 07:28:34.897150 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:34Z","lastTransitionTime":"2026-02-02T07:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.000366 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.000429 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.000446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.000469 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.000490 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.015642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.015698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.015715 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.015737 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.015754 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:35 crc kubenswrapper[4730]: E0202 07:28:35.036112 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.040987 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.041026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.041036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.041051 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.041061 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:35 crc kubenswrapper[4730]: E0202 07:28:35.059453 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.064130 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.064179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.064189 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.064205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.064217 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:35 crc kubenswrapper[4730]: E0202 07:28:35.082390 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:35Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.087373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.087430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.087454 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.087483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.087506 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:35 crc kubenswrapper[4730]: E0202 07:28:35.108053 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:35Z is after 2025-08-24T17:21:41Z"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.112370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.112524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.112551 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.112578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.112598 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:35 crc kubenswrapper[4730]: E0202 07:28:35.130640 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:35Z is after 2025-08-24T17:21:41Z"
Feb 02 07:28:35 crc kubenswrapper[4730]: E0202 07:28:35.130810 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.132512 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.132535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.132544 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.132555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.132566 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.238652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.238717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.238735 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.238760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.238789 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.252828 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 07:28:35 crc kubenswrapper[4730]: E0202 07:28:35.252923 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.253026 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.253061 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 07:28:35 crc kubenswrapper[4730]: E0202 07:28:35.253315 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 07:28:35 crc kubenswrapper[4730]: E0202 07:28:35.253464 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.268432 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.285309 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:49:12.284352603 +0000 UTC
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.341438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.341679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.341765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.341836 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.341909 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.444778 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.444827 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.444846 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.444869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.444885 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.548081 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.548132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.548149 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.548210 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.548230 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.651536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.651625 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.651645 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.651667 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.651683 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.753972 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.754255 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.754338 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.754415 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.754510 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.857461 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.857677 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.857752 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.857842 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.857925 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.959991 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.960043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.960059 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.960082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:35 crc kubenswrapper[4730]: I0202 07:28:35.960103 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:35Z","lastTransitionTime":"2026-02-02T07:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.062264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.062294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.062304 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.062318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.062329 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:36Z","lastTransitionTime":"2026-02-02T07:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.165133 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.165361 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.165446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.165526 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.165607 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:36Z","lastTransitionTime":"2026-02-02T07:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.252560 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:36 crc kubenswrapper[4730]: E0202 07:28:36.252905 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.267669 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.267737 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.267760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.267790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.267813 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:36Z","lastTransitionTime":"2026-02-02T07:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.286231 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:40:18.34079323 +0000 UTC Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.370255 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.370515 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.370605 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.370692 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.370777 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:36Z","lastTransitionTime":"2026-02-02T07:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.473531 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.473593 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.473615 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.473643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.473663 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:36Z","lastTransitionTime":"2026-02-02T07:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.576314 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.576386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.576412 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.576442 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.576463 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:36Z","lastTransitionTime":"2026-02-02T07:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.678483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.678769 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.678844 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.678919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.678985 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:36Z","lastTransitionTime":"2026-02-02T07:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.782243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.782295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.782312 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.782336 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.782353 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:36Z","lastTransitionTime":"2026-02-02T07:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.885298 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.885542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.885609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.885684 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.885755 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:36Z","lastTransitionTime":"2026-02-02T07:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.988059 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.988106 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.988117 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.988134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:36 crc kubenswrapper[4730]: I0202 07:28:36.988146 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:36Z","lastTransitionTime":"2026-02-02T07:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.090774 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.090822 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.090832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.090849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.090860 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:37Z","lastTransitionTime":"2026-02-02T07:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.193685 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.193748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.193770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.193799 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.193823 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:37Z","lastTransitionTime":"2026-02-02T07:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.252768 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:37 crc kubenswrapper[4730]: E0202 07:28:37.252965 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.253328 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.253405 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:37 crc kubenswrapper[4730]: E0202 07:28:37.253495 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:37 crc kubenswrapper[4730]: E0202 07:28:37.253543 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.274050 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.286893 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:25:50.348951056 +0000 UTC Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.293372 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.296683 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.296755 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.296774 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.296798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.296821 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:37Z","lastTransitionTime":"2026-02-02T07:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.311562 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.330526 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:15Z\\\",\\\"message\\\":\\\"2026-02-02T07:27:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed\\\\n2026-02-02T07:27:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed to /host/opt/cni/bin/\\\\n2026-02-02T07:27:30Z [verbose] multus-daemon started\\\\n2026-02-02T07:27:30Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T07:28:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.345412 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef
4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.366530 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda
1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.385813 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.399399 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.399622 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.399657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.399669 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.399686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.399697 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:37Z","lastTransitionTime":"2026-02-02T07:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.415496 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.428906 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"381d926c-779d-4f44-8145-80e994fd7e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5359afffc6f8581ade91768d6db7cfc13fec7245a3d5c01fb0815948e4619b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6a32e681db3e8d84fd16ec855e10007247bcad255effc0263781a825017166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a32e681db3e8d84fd16ec855e10007247bcad255effc0263781a825017166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.444027 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.460142 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.473199 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc 
kubenswrapper[4730]: I0202 07:28:37.490945 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7f82ce-4e36-4150-a28d-365fcac970c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.503220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.503283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.503308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.503337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.503358 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:37Z","lastTransitionTime":"2026-02-02T07:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.510608 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.529765 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.543606 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.566991 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:28Z\\\",\\\"message\\\":\\\"ft-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 07:28:28.200411 6785 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z]\\\\nI0202 07:28:28.200429 6785 event.go:377] Event(v1.ObjectRefere\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:28:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6
ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:37Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.605801 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.606013 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.606023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.606037 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.606046 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:37Z","lastTransitionTime":"2026-02-02T07:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.709209 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.709284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.709309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.709339 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.709360 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:37Z","lastTransitionTime":"2026-02-02T07:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.811596 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.811667 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.811692 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.811721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.811747 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:37Z","lastTransitionTime":"2026-02-02T07:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.914245 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.914280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.914291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.914306 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:37 crc kubenswrapper[4730]: I0202 07:28:37.914317 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:37Z","lastTransitionTime":"2026-02-02T07:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.016928 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.016972 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.016984 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.017004 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.017015 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:38Z","lastTransitionTime":"2026-02-02T07:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.119594 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.119661 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.119685 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.119716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.119739 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:38Z","lastTransitionTime":"2026-02-02T07:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.221722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.221811 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.221820 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.221835 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.221846 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:38Z","lastTransitionTime":"2026-02-02T07:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.252393 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:38 crc kubenswrapper[4730]: E0202 07:28:38.252743 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.271008 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.288017 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:08:37.879157946 +0000 UTC Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.324763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.324815 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.324825 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.324840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.324852 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:38Z","lastTransitionTime":"2026-02-02T07:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.427487 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.427542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.427559 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.427585 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.427609 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:38Z","lastTransitionTime":"2026-02-02T07:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.530358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.530402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.530415 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.530430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.530440 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:38Z","lastTransitionTime":"2026-02-02T07:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.633596 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.633647 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.633665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.633689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.633706 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:38Z","lastTransitionTime":"2026-02-02T07:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.736268 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.736342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.736367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.736394 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.736416 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:38Z","lastTransitionTime":"2026-02-02T07:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.838177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.838206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.838214 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.838227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.838235 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:38Z","lastTransitionTime":"2026-02-02T07:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.941136 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.941255 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.941277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.941306 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:38 crc kubenswrapper[4730]: I0202 07:28:38.941329 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:38Z","lastTransitionTime":"2026-02-02T07:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.043847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.043931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.043968 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.043998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.044021 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:39Z","lastTransitionTime":"2026-02-02T07:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.146935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.146989 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.147005 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.147030 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.147047 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:39Z","lastTransitionTime":"2026-02-02T07:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.249670 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.249731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.249752 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.249778 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.249798 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:39Z","lastTransitionTime":"2026-02-02T07:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.253374 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.253454 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.253382 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:39 crc kubenswrapper[4730]: E0202 07:28:39.253565 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:39 crc kubenswrapper[4730]: E0202 07:28:39.253766 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:39 crc kubenswrapper[4730]: E0202 07:28:39.253960 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.288730 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 05:51:35.42622602 +0000 UTC Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.352615 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.352683 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.352701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.352723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.352741 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:39Z","lastTransitionTime":"2026-02-02T07:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.455691 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.455759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.455784 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.455810 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.455833 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:39Z","lastTransitionTime":"2026-02-02T07:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.559062 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.559121 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.559143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.559208 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.559235 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:39Z","lastTransitionTime":"2026-02-02T07:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.662975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.663019 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.663034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.663057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.663075 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:39Z","lastTransitionTime":"2026-02-02T07:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.765937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.765969 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.765980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.765997 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.766007 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:39Z","lastTransitionTime":"2026-02-02T07:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.869602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.869672 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.869694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.869726 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.869748 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:39Z","lastTransitionTime":"2026-02-02T07:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.972518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.972592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.972614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.972641 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:39 crc kubenswrapper[4730]: I0202 07:28:39.972658 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:39Z","lastTransitionTime":"2026-02-02T07:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.075617 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.075670 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.075687 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.075708 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.075725 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:40Z","lastTransitionTime":"2026-02-02T07:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.178191 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.178247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.178269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.178298 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.178322 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:40Z","lastTransitionTime":"2026-02-02T07:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.252028 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:40 crc kubenswrapper[4730]: E0202 07:28:40.252243 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.280859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.280888 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.280898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.280911 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.280921 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:40Z","lastTransitionTime":"2026-02-02T07:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.289244 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:32:05.103892323 +0000 UTC Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.384108 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.384205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.384229 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.384256 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.384273 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:40Z","lastTransitionTime":"2026-02-02T07:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.487306 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.487361 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.487380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.487402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.487418 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:40Z","lastTransitionTime":"2026-02-02T07:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.590000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.590027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.590036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.590053 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.590063 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:40Z","lastTransitionTime":"2026-02-02T07:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.692678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.692706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.692716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.692729 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.692738 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:40Z","lastTransitionTime":"2026-02-02T07:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.795473 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.795541 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.795564 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.795592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.795614 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:40Z","lastTransitionTime":"2026-02-02T07:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.897854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.897919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.897945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.897974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:40 crc kubenswrapper[4730]: I0202 07:28:40.897999 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:40Z","lastTransitionTime":"2026-02-02T07:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.000825 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.000897 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.000920 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.000952 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.000970 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:41Z","lastTransitionTime":"2026-02-02T07:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.103566 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.103622 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.103642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.103673 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.103695 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:41Z","lastTransitionTime":"2026-02-02T07:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.206707 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.206757 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.206767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.206783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.206794 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:41Z","lastTransitionTime":"2026-02-02T07:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.252458 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:41 crc kubenswrapper[4730]: E0202 07:28:41.252645 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.252702 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.252787 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:41 crc kubenswrapper[4730]: E0202 07:28:41.252863 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:41 crc kubenswrapper[4730]: E0202 07:28:41.252988 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.290464 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:09:33.106142156 +0000 UTC Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.311984 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.312030 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.312041 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.312282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.312295 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:41Z","lastTransitionTime":"2026-02-02T07:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.415417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.415463 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.415485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.415499 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.415510 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:41Z","lastTransitionTime":"2026-02-02T07:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.517860 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.517917 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.517934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.517957 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.517973 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:41Z","lastTransitionTime":"2026-02-02T07:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.620527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.620628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.620647 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.620672 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.620699 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:41Z","lastTransitionTime":"2026-02-02T07:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.723315 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.723367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.723378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.723397 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.723409 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:41Z","lastTransitionTime":"2026-02-02T07:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.826754 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.826816 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.826832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.826857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.826874 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:41Z","lastTransitionTime":"2026-02-02T07:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.929464 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.929532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.929548 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.929603 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:41 crc kubenswrapper[4730]: I0202 07:28:41.929623 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:41Z","lastTransitionTime":"2026-02-02T07:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.032459 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.032523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.032540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.032564 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.032581 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:42Z","lastTransitionTime":"2026-02-02T07:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.135310 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.135370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.135389 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.135413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.135430 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:42Z","lastTransitionTime":"2026-02-02T07:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.238126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.238220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.238242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.238273 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.238295 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:42Z","lastTransitionTime":"2026-02-02T07:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.253025 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:42 crc kubenswrapper[4730]: E0202 07:28:42.253586 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.253616 4730 scope.go:117] "RemoveContainer" containerID="40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d" Feb 02 07:28:42 crc kubenswrapper[4730]: E0202 07:28:42.253997 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.291517 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:57:50.50362459 +0000 UTC Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.340675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.340716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.340725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.340740 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.340749 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:42Z","lastTransitionTime":"2026-02-02T07:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.444110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.444209 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.444226 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.444249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.444269 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:42Z","lastTransitionTime":"2026-02-02T07:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.547682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.547719 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.547729 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.547744 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.547754 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:42Z","lastTransitionTime":"2026-02-02T07:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.650301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.650378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.650395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.650466 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.650497 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:42Z","lastTransitionTime":"2026-02-02T07:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.753554 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.753594 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.753604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.753621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.753636 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:42Z","lastTransitionTime":"2026-02-02T07:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.856826 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.856877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.856939 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.856974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.856989 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:42Z","lastTransitionTime":"2026-02-02T07:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.960346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.960402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.960420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.960443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:42 crc kubenswrapper[4730]: I0202 07:28:42.960459 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:42Z","lastTransitionTime":"2026-02-02T07:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.062880 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.062913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.062924 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.062938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.062948 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:43Z","lastTransitionTime":"2026-02-02T07:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.165611 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.165669 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.165686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.165712 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.165728 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:43Z","lastTransitionTime":"2026-02-02T07:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.252704 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.252764 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:43 crc kubenswrapper[4730]: E0202 07:28:43.252836 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.252892 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:43 crc kubenswrapper[4730]: E0202 07:28:43.253323 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:43 crc kubenswrapper[4730]: E0202 07:28:43.253697 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.269408 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.269480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.269505 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.269534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.269561 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:43Z","lastTransitionTime":"2026-02-02T07:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.291741 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:43:13.18383093 +0000 UTC Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.372984 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.373022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.373031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.373045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.373054 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:43Z","lastTransitionTime":"2026-02-02T07:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.475917 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.475991 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.476015 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.476047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.476068 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:43Z","lastTransitionTime":"2026-02-02T07:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.578913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.578954 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.579099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.579142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.579207 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:43Z","lastTransitionTime":"2026-02-02T07:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.681453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.681503 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.681518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.681542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.681562 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:43Z","lastTransitionTime":"2026-02-02T07:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.783642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.783696 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.783712 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.783735 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.783752 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:43Z","lastTransitionTime":"2026-02-02T07:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.886517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.886563 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.886574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.886590 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.886602 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:43Z","lastTransitionTime":"2026-02-02T07:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.989912 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.989979 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.990001 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.990032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:43 crc kubenswrapper[4730]: I0202 07:28:43.990053 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:43Z","lastTransitionTime":"2026-02-02T07:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.092824 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.092882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.092899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.092921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.092938 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:44Z","lastTransitionTime":"2026-02-02T07:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.195306 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.195348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.195358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.195372 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.195381 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:44Z","lastTransitionTime":"2026-02-02T07:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.252373 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:44 crc kubenswrapper[4730]: E0202 07:28:44.252574 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.292868 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:44:58.554093314 +0000 UTC Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.297984 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.298036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.298055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.298078 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.298108 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:44Z","lastTransitionTime":"2026-02-02T07:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.400698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.400757 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.400776 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.400802 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.400819 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:44Z","lastTransitionTime":"2026-02-02T07:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.503051 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.503091 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.503100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.503113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.503124 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:44Z","lastTransitionTime":"2026-02-02T07:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.605887 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.605942 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.605959 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.605981 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.605998 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:44Z","lastTransitionTime":"2026-02-02T07:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.709308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.709369 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.709387 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.709413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.709432 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:44Z","lastTransitionTime":"2026-02-02T07:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.813006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.813100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.813119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.813199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.813219 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:44Z","lastTransitionTime":"2026-02-02T07:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.916072 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.916136 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.916256 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.916295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:44 crc kubenswrapper[4730]: I0202 07:28:44.916317 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:44Z","lastTransitionTime":"2026-02-02T07:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.019647 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.019685 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.019695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.019709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.019719 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.122411 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.122534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.122552 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.122613 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.122636 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.133951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.134021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.134045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.134073 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.134096 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: E0202 07:28:45.152893 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:45Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.157041 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.157087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.157104 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.157118 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.157128 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: E0202 07:28:45.170379 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:45Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.174442 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.174499 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.174510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.174524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.174534 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: E0202 07:28:45.185987 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:45Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.189722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.189779 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.189798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.189821 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.189838 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: E0202 07:28:45.200811 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:45Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.203560 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.203592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.203604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.203619 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.203629 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: E0202 07:28:45.214371 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"978b4823-3590-49d6-b396-0b6ed8f87451\\\",\\\"systemUUID\\\":\\\"ce72a09f-80ac-4a93-a998-dc866b84ece7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:45Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:45 crc kubenswrapper[4730]: E0202 07:28:45.214531 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.224726 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.224777 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.224794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.224814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.224831 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.252617 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.252736 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:45 crc kubenswrapper[4730]: E0202 07:28:45.252856 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.253035 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:45 crc kubenswrapper[4730]: E0202 07:28:45.253095 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:45 crc kubenswrapper[4730]: E0202 07:28:45.253306 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.293858 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 03:48:10.46585957 +0000 UTC Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.327689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.327723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.327731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.327747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.327756 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.430328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.430370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.430409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.430450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.430463 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.533085 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.533140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.533156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.533227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.533245 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.635884 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.635934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.635944 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.635960 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.635975 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.737907 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.737981 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.737992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.738009 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.738019 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.792144 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs\") pod \"network-metrics-daemon-xrjth\" (UID: \"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\") " pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:45 crc kubenswrapper[4730]: E0202 07:28:45.792305 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:28:45 crc kubenswrapper[4730]: E0202 07:28:45.792404 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs podName:f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc nodeName:}" failed. No retries permitted until 2026-02-02 07:29:49.792383865 +0000 UTC m=+163.213587303 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs") pod "network-metrics-daemon-xrjth" (UID: "f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.839816 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.839848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.839857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.839870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.839879 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.943430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.943492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.943523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.943564 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:45 crc kubenswrapper[4730]: I0202 07:28:45.943587 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:45Z","lastTransitionTime":"2026-02-02T07:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.045590 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.045651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.045667 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.045691 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.045708 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:46Z","lastTransitionTime":"2026-02-02T07:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.148664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.148736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.148749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.148766 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.148778 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:46Z","lastTransitionTime":"2026-02-02T07:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.252148 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:46 crc kubenswrapper[4730]: E0202 07:28:46.252415 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.252573 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.252622 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.252639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.252660 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.252677 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:46Z","lastTransitionTime":"2026-02-02T07:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.294817 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:40:28.675473914 +0000 UTC Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.354668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.354731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.354747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.354968 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.354992 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:46Z","lastTransitionTime":"2026-02-02T07:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.457066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.457116 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.457133 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.457184 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.457204 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:46Z","lastTransitionTime":"2026-02-02T07:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.559405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.559472 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.559483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.559530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.559543 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:46Z","lastTransitionTime":"2026-02-02T07:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.662250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.662296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.662305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.662321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.662331 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:46Z","lastTransitionTime":"2026-02-02T07:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.765020 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.765097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.765119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.765149 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.765214 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:46Z","lastTransitionTime":"2026-02-02T07:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.868715 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.868779 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.868797 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.868819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.868836 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:46Z","lastTransitionTime":"2026-02-02T07:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.971473 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.971536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.971559 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.971581 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:46 crc kubenswrapper[4730]: I0202 07:28:46.971598 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:46Z","lastTransitionTime":"2026-02-02T07:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.074405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.074474 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.074491 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.074517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.074536 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:47Z","lastTransitionTime":"2026-02-02T07:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.177431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.177495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.177507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.177525 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.177537 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:47Z","lastTransitionTime":"2026-02-02T07:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.252322 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.252374 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.252322 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:47 crc kubenswrapper[4730]: E0202 07:28:47.252458 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:47 crc kubenswrapper[4730]: E0202 07:28:47.252678 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:47 crc kubenswrapper[4730]: E0202 07:28:47.252752 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.270907 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee68b89df02b7584b3f0cad83bc9a0ee97fb01597cb0eb953860fd7857767388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
05d08a1aef9cc42eea97348b344ef2171562935dd0c6a6adecfa0f46f568b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.280825 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.280881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.280903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.280931 4730 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.280955 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:47Z","lastTransitionTime":"2026-02-02T07:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.288751 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.295061 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:18:12.800796714 +0000 UTC Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.309540 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nht" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"736a9d82-2671-4b6b-a9f2-2488de13b521\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba95321744318622355dab6a2e943a05f6c65365d37cf8783345c5b7dde3c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d0
45c7ac42fe246e33cd20f03d0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b212c6d3621c86cdfbcff744d022ccfd926d045c7ac42fe246e33cd20f03d0e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e455186b99b7d55307fec29829ca5d24b94512ad7042e4571c6639ad8cfa4895\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df595ba336a574be06f9d47f67b5d5bbe3adaa2d802798bf3701279011dcc1ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b334eaa4c85b160b5302ad20f4cc5633c55797252dd4baaca001d146af79f5a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc107aa76d568bd3704ff3ab4291ccd7dea79e8a9138e69d99fcd1895298c318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:3
3Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e6978368d51b59d8ad4bdf62b89b126639edb441ea7d02fbae5398be29f5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqqdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nht\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.338764 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdaf8a78-eb6a-4585-92c9-c209858834e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbe3613dd21a37327de97a9dc409fb91e906e5027329b2ef9a2a18a997b563f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81eb9712204e71bf6cb95987dff48f5519ca3bc03e5fba3d9c7b3c09028e79ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba20a237997b7d208e8676716e83196faed32cd3617534734b5b7d7a957805a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://015ce6b77dc828689d6a21079d4f3b377af6e26111ab5a0d6460c5364df14bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595d47883ee90602f1f73241ec8d6c3c63d8635b4a097cd1b84121316a7e8a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d2f45cbef6e01232ac18b16f4a33e17322f16b83d15425e547cb63f4f4277f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92d2f45cbef6e01232ac18b16f4a33e17322f16b83d15425e547cb63f4f4277f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14de217bbf35067e9c677e08a8bdd93831456597f9a1fd2a7ca9383c06057c82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14de217bbf35067e9c677e08a8bdd93831456597f9a1fd2a7ca9383c06057c82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://221ef41c281b2c0b07c336e6f0cf0f905ae988bbd666d72144ead27449d4f7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://221ef41c281b2c0b07c336e6f0cf0f905ae988bbd666d72144ead27449d4f7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.362797 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6bc53a5-2daa-4979-9368-c76de4c468e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T07:27:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 07:27:20.754121 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 07:27:20.756206 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4021835604/tls.crt::/tmp/serving-cert-4021835604/tls.key\\\\\\\"\\\\nI0202 07:27:26.670706 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 07:27:26.675061 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 07:27:26.675094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 07:27:26.675137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 07:27:26.675149 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 07:27:26.689028 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 07:27:26.689065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689110 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 07:27:26.689127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 07:27:26.689134 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 07:27:26.689142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 07:27:26.689193 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 07:27:26.689238 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 07:27:26.690702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea06a93fa11ec8edf64a64382386adda
1849f924f811f90b9ac10a5b8e89c0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.384306 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7b0b429264051a5e7f262c2965c9738017f1f73bd709a8b20591435b2ab9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.384628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.384824 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.384859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.384948 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.385027 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:47Z","lastTransitionTime":"2026-02-02T07:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.396050 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pf2vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75645fc5-304c-488e-830a-4564f86f3fae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af724a177941979ce347c325452241bb40c8ded77275c2d992f40f645d56fc4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjjj4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pf2vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.407801 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cde55f-e8c2-493e-82b6-a3b4a839366b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2905ba29cb7882b6d930e73f575c2ab3c5e0c8109f83410ff031fa36a495e71f\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dztbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ghs2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.422136 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zp8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b75ed7-302d-4f21-9c20-6ecab241b7b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a
efc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:15Z\\\",\\\"message\\\":\\\"2026-02-02T07:27:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed\\\\n2026-02-02T07:27:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b62c5375-79eb-4314-8e2c-d1cf010a28ed to /host/opt/cni/bin/\\\\n2026-02-02T07:27:30Z [verbose] multus-daemon started\\\\n2026-02-02T07:27:30Z [verbose] Readiness Indicator file check\\\\n2026-02-02T07:28:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib
/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbxn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zp8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.436875 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"590f0b91-13e8-4a5b-9422-7ea0707b10d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567d92dae474915c1fdf72e8b7c8198768a6388bacd53db37f6943bedc6783be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://559fe8cadbdd62661725c76ef1a32e2c3e0ef
4e10ccd72281308e919943c9d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4dtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l7ljz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.448403 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"381d926c-779d-4f44-8145-80e994fd7e02\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5359afffc6f8581ade91768d6db7cfc13fec7245a3d5c01fb0815948e4619b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6a32e681db3e8d84fd16ec855e10007247bcad255effc0263781a825017166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a32e681db3e8d84fd16ec855e10007247bcad255effc0263781a825017166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.461921 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"083ef10f-a991-413d-aaf5-722184dce6ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76e0c6cc9321a7ad995ec7d1c1cd48238e8137e8d7639b27ae4d7255736e0129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cd9f8192bffd30d4adece937feabb05667ea49795b44f1c202e44009bc1f515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113fecede137e70f35152e1527d1fc45c15fc86c8ae1c8e2030bf328d37bbe52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.475239 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.488153 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.488233 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.488250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:47 crc 
kubenswrapper[4730]: I0202 07:28:47.488273 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.488291 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:47Z","lastTransitionTime":"2026-02-02T07:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.489690 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7f82ce-4e36-4150-a28d-365fcac970c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16a25ddf37b20b6110843abca3c4baf4f7305e37f3551eefc1c3709cc6aa639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b328531cb1561a3cc7854cdf8c5bd439cc4e1cce7679b8480766f34b75d60163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b440e0d6a01415d7a830a44fdbbc2d9e3663602b96ecf7ec60981ff3430a5812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3371e38f59058733b1a8b416ca6fb54366922bc763fdc70c30964ea2444308e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.510257 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.527995 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b77f49355f33e80f3493fb2f05b3052117ea9fbe05382ced9d4d3314b52f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.541680 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e46a89bb-4594-410c-8da8-6935fa870ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c06e687a595a8519ffe2881eccaeaf284b349a1f085122b7f02c8fab099fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8xrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.577639 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba7d1b84-4596-463a-bc77-c365c3c969b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T07:28:28Z\\\",\\\"message\\\":\\\"ft-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 07:28:28.200411 6785 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:28Z is after 2025-08-24T17:21:41Z]\\\\nI0202 07:28:28.200429 6785 event.go:377] Event(v1.ObjectRefere\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T07:28:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T07:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db9b9823a126c28c6
ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T07:27:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T07:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dbxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54z89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.590304 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.590349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.590361 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.590378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.590392 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:47Z","lastTransitionTime":"2026-02-02T07:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.594970 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrjth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T07:27:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98k2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T07:27:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrjth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T07:28:47Z is after 2025-08-24T17:21:41Z" Feb 02 07:28:47 crc 
kubenswrapper[4730]: I0202 07:28:47.693144 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.693221 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.693232 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.693248 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.693259 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:47Z","lastTransitionTime":"2026-02-02T07:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.796745 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.796798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.796817 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.796840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.796856 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:47Z","lastTransitionTime":"2026-02-02T07:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.899818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.899886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.899904 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.899932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:47 crc kubenswrapper[4730]: I0202 07:28:47.899950 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:47Z","lastTransitionTime":"2026-02-02T07:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.002649 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.002723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.002785 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.002818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.002839 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:48Z","lastTransitionTime":"2026-02-02T07:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.105480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.105521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.105532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.105548 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.105558 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:48Z","lastTransitionTime":"2026-02-02T07:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.208704 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.208746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.208759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.208773 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.208784 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:48Z","lastTransitionTime":"2026-02-02T07:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.252746 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:48 crc kubenswrapper[4730]: E0202 07:28:48.252891 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.296132 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:16:50.196927554 +0000 UTC Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.311416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.311475 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.311495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.311526 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.311550 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:48Z","lastTransitionTime":"2026-02-02T07:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.414922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.414985 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.414998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.415024 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.415043 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:48Z","lastTransitionTime":"2026-02-02T07:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.517990 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.518114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.518308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.518419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.518611 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:48Z","lastTransitionTime":"2026-02-02T07:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.622212 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.622294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.622317 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.622348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.622373 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:48Z","lastTransitionTime":"2026-02-02T07:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.724739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.724789 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.724802 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.724819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.724833 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:48Z","lastTransitionTime":"2026-02-02T07:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.826880 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.826931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.826945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.826962 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.826974 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:48Z","lastTransitionTime":"2026-02-02T07:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.929685 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.929726 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.929735 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.929749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:48 crc kubenswrapper[4730]: I0202 07:28:48.929758 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:48Z","lastTransitionTime":"2026-02-02T07:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.031862 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.031904 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.031915 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.031933 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.031945 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:49Z","lastTransitionTime":"2026-02-02T07:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.135118 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.135182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.135196 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.135241 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.135254 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:49Z","lastTransitionTime":"2026-02-02T07:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.237709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.237773 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.237791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.237817 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.237837 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:49Z","lastTransitionTime":"2026-02-02T07:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.252250 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.252306 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.252316 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:49 crc kubenswrapper[4730]: E0202 07:28:49.252422 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:49 crc kubenswrapper[4730]: E0202 07:28:49.252506 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:49 crc kubenswrapper[4730]: E0202 07:28:49.252765 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.296914 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:13:00.365569557 +0000 UTC Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.339959 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.340020 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.340038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.340063 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.340083 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:49Z","lastTransitionTime":"2026-02-02T07:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.442695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.442731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.442742 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.442758 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.442769 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:49Z","lastTransitionTime":"2026-02-02T07:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.545958 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.545996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.546007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.546024 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.546034 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:49Z","lastTransitionTime":"2026-02-02T07:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.649097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.649135 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.649146 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.649176 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.649186 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:49Z","lastTransitionTime":"2026-02-02T07:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.752189 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.752222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.752233 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.752248 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.752259 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:49Z","lastTransitionTime":"2026-02-02T07:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.856974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.857057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.857080 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.857107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.857134 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:49Z","lastTransitionTime":"2026-02-02T07:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.960465 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.960526 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.960542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.960565 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:49 crc kubenswrapper[4730]: I0202 07:28:49.960582 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:49Z","lastTransitionTime":"2026-02-02T07:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.063483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.063543 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.063561 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.063584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.063602 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:50Z","lastTransitionTime":"2026-02-02T07:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.166337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.166381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.166391 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.166417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.166430 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:50Z","lastTransitionTime":"2026-02-02T07:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.252611 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:50 crc kubenswrapper[4730]: E0202 07:28:50.252898 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.268062 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.268113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.268129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.268150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.268226 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:50Z","lastTransitionTime":"2026-02-02T07:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.298016 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 01:14:45.377612336 +0000 UTC Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.370941 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.370986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.370996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.371010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.371020 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:50Z","lastTransitionTime":"2026-02-02T07:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.474605 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.474655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.474672 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.474695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.474713 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:50Z","lastTransitionTime":"2026-02-02T07:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.577755 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.577817 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.577835 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.577857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.577873 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:50Z","lastTransitionTime":"2026-02-02T07:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.680981 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.681038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.681060 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.681088 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.681110 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:50Z","lastTransitionTime":"2026-02-02T07:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.784146 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.784246 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.784266 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.784287 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.784304 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:50Z","lastTransitionTime":"2026-02-02T07:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.887110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.887225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.887248 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.887274 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.887292 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:50Z","lastTransitionTime":"2026-02-02T07:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.989889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.989956 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.989978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.990006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:50 crc kubenswrapper[4730]: I0202 07:28:50.990028 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:50Z","lastTransitionTime":"2026-02-02T07:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.093761 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.093809 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.093821 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.093839 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.093853 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:51Z","lastTransitionTime":"2026-02-02T07:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.196949 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.196996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.197012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.197036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.197071 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:51Z","lastTransitionTime":"2026-02-02T07:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.252904 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.252915 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:51 crc kubenswrapper[4730]: E0202 07:28:51.253113 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.253200 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:51 crc kubenswrapper[4730]: E0202 07:28:51.253398 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:51 crc kubenswrapper[4730]: E0202 07:28:51.253545 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.298211 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 20:43:48.565248947 +0000 UTC Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.299433 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.299483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.299499 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.299522 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.299537 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:51Z","lastTransitionTime":"2026-02-02T07:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.401721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.401794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.401815 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.401844 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.401864 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:51Z","lastTransitionTime":"2026-02-02T07:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.504313 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.504386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.504413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.504443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.504464 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:51Z","lastTransitionTime":"2026-02-02T07:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.606959 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.607020 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.607037 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.607061 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.607079 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:51Z","lastTransitionTime":"2026-02-02T07:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.709824 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.709888 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.709912 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.709941 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.709965 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:51Z","lastTransitionTime":"2026-02-02T07:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.812379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.812429 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.812448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.812470 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.812488 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:51Z","lastTransitionTime":"2026-02-02T07:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.915333 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.915388 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.915404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.915430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:51 crc kubenswrapper[4730]: I0202 07:28:51.915447 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:51Z","lastTransitionTime":"2026-02-02T07:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.017763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.017819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.017838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.017865 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.017882 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:52Z","lastTransitionTime":"2026-02-02T07:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.121112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.121224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.121248 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.121277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.121299 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:52Z","lastTransitionTime":"2026-02-02T07:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.223836 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.223870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.223879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.223892 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.223903 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:52Z","lastTransitionTime":"2026-02-02T07:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.252314 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:52 crc kubenswrapper[4730]: E0202 07:28:52.252741 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.298855 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:23:36.696590648 +0000 UTC Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.328346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.328409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.328420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.328438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.328453 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:52Z","lastTransitionTime":"2026-02-02T07:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.430859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.430919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.430937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.430967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.430986 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:52Z","lastTransitionTime":"2026-02-02T07:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.534377 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.534447 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.534462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.534485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.534502 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:52Z","lastTransitionTime":"2026-02-02T07:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.636835 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.636886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.636899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.636918 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.636931 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:52Z","lastTransitionTime":"2026-02-02T07:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.739761 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.739805 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.739813 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.739827 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.739835 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:52Z","lastTransitionTime":"2026-02-02T07:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.842329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.842368 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.842380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.842424 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.842438 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:52Z","lastTransitionTime":"2026-02-02T07:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.945712 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.945782 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.945804 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.945835 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:52 crc kubenswrapper[4730]: I0202 07:28:52.945858 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:52Z","lastTransitionTime":"2026-02-02T07:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.048973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.049022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.049041 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.049065 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.049084 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:53Z","lastTransitionTime":"2026-02-02T07:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.152298 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.152367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.152386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.152448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.152472 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:53Z","lastTransitionTime":"2026-02-02T07:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.252994 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.253013 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:28:53 crc kubenswrapper[4730]: E0202 07:28:53.253251 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.253352 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:28:53 crc kubenswrapper[4730]: E0202 07:28:53.253637 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:28:53 crc kubenswrapper[4730]: E0202 07:28:53.253848 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.254898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.254928 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.254938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.254953 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.254966 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:53Z","lastTransitionTime":"2026-02-02T07:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.299140 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 20:16:27.394433789 +0000 UTC Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.357610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.357660 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.357672 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.357689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.357700 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:53Z","lastTransitionTime":"2026-02-02T07:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.460129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.460206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.460220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.460243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.460255 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:53Z","lastTransitionTime":"2026-02-02T07:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.563156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.563233 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.563244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.563262 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.563273 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:53Z","lastTransitionTime":"2026-02-02T07:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.665588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.665648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.665666 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.665690 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.665709 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:53Z","lastTransitionTime":"2026-02-02T07:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.768044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.768097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.768113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.768188 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.768206 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:53Z","lastTransitionTime":"2026-02-02T07:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.870372 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.870437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.870455 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.870480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.870500 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:53Z","lastTransitionTime":"2026-02-02T07:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.972605 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.972655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.972665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.972684 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:53 crc kubenswrapper[4730]: I0202 07:28:53.972699 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:53Z","lastTransitionTime":"2026-02-02T07:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.075015 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.075077 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.075093 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.075117 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.075135 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:54Z","lastTransitionTime":"2026-02-02T07:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.178804 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.178867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.178884 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.178908 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.178926 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:54Z","lastTransitionTime":"2026-02-02T07:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.252947 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:28:54 crc kubenswrapper[4730]: E0202 07:28:54.253701 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.281551 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.281581 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.281590 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.281603 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.281614 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:54Z","lastTransitionTime":"2026-02-02T07:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.301236 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:36:09.37416371 +0000 UTC Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.384482 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.384530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.384542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.384560 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.384573 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:54Z","lastTransitionTime":"2026-02-02T07:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.487763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.487830 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.487850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.487871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.487888 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:54Z","lastTransitionTime":"2026-02-02T07:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.590847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.590886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.590897 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.590916 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.590929 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:54Z","lastTransitionTime":"2026-02-02T07:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.694370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.694450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.694466 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.694499 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.694523 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:54Z","lastTransitionTime":"2026-02-02T07:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.798705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.798762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.798779 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.798803 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.798820 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:54Z","lastTransitionTime":"2026-02-02T07:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.902636 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.902717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.902739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.902769 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:54 crc kubenswrapper[4730]: I0202 07:28:54.902797 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:54Z","lastTransitionTime":"2026-02-02T07:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.005539 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.005601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.005620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.005651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.005676 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:55Z","lastTransitionTime":"2026-02-02T07:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.109360 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.109424 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.109439 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.109462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.109479 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:55Z","lastTransitionTime":"2026-02-02T07:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.219529 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.219601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.219621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.219679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.219784 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:55Z","lastTransitionTime":"2026-02-02T07:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.252926 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 07:28:55 crc kubenswrapper[4730]: E0202 07:28:55.253067 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.253349 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 07:28:55 crc kubenswrapper[4730]: E0202 07:28:55.253429 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.253574 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 07:28:55 crc kubenswrapper[4730]: E0202 07:28:55.253867 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.301396 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:37:09.616118777 +0000 UTC
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.323239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.323299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.323316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.323340 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.323361 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:55Z","lastTransitionTime":"2026-02-02T07:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.426627 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.426693 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.426709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.426734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.426753 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:55Z","lastTransitionTime":"2026-02-02T07:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.529018 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.529079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.529096 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.529119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.529138 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:55Z","lastTransitionTime":"2026-02-02T07:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.541104 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.541225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.541252 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.541283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.541305 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T07:28:55Z","lastTransitionTime":"2026-02-02T07:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.605936 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"]
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.606501 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.611030 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.611103 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.611050 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.611506 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.668045 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podStartSLOduration=88.668011307 podStartE2EDuration="1m28.668011307s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:28:55.666941459 +0000 UTC m=+109.088144847" watchObservedRunningTime="2026-02-02 07:28:55.668011307 +0000 UTC m=+109.089214705"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.668604 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-pf2vl" podStartSLOduration=88.668591332 podStartE2EDuration="1m28.668591332s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:28:55.648707003 +0000 UTC m=+109.069910401" watchObservedRunningTime="2026-02-02 07:28:55.668591332 +0000 UTC m=+109.089794720"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.691653 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zp8tp" podStartSLOduration=88.691626763 podStartE2EDuration="1m28.691626763s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:28:55.691461558 +0000 UTC m=+109.112664986" watchObservedRunningTime="2026-02-02 07:28:55.691626763 +0000 UTC m=+109.112830151"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.711590 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/81566aff-e608-4c06-a452-a0517fc7428d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.711654 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81566aff-e608-4c06-a452-a0517fc7428d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.711693 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/81566aff-e608-4c06-a452-a0517fc7428d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.711829 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81566aff-e608-4c06-a452-a0517fc7428d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.712031 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81566aff-e608-4c06-a452-a0517fc7428d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.712860 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l7ljz" podStartSLOduration=87.712841506 podStartE2EDuration="1m27.712841506s" podCreationTimestamp="2026-02-02 07:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:28:55.712376224 +0000 UTC m=+109.133579602" watchObservedRunningTime="2026-02-02 07:28:55.712841506 +0000 UTC m=+109.134044894"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.754328 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=17.754305488 podStartE2EDuration="17.754305488s" podCreationTimestamp="2026-02-02 07:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:28:55.754225416 +0000 UTC m=+109.175428814" watchObservedRunningTime="2026-02-02 07:28:55.754305488 +0000 UTC m=+109.175508866"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.787699 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.787675618 podStartE2EDuration="1m28.787675618s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:28:55.786133868 +0000 UTC m=+109.207337246" watchObservedRunningTime="2026-02-02 07:28:55.787675618 +0000 UTC m=+109.208879006"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.812974 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/81566aff-e608-4c06-a452-a0517fc7428d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.813031 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81566aff-e608-4c06-a452-a0517fc7428d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.813067 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/81566aff-e608-4c06-a452-a0517fc7428d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.813147 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81566aff-e608-4c06-a452-a0517fc7428d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.813146 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/81566aff-e608-4c06-a452-a0517fc7428d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.813253 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81566aff-e608-4c06-a452-a0517fc7428d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.813288 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/81566aff-e608-4c06-a452-a0517fc7428d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.814657 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81566aff-e608-4c06-a452-a0517fc7428d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.821036 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81566aff-e608-4c06-a452-a0517fc7428d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.831971 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=20.831941713 podStartE2EDuration="20.831941713s" podCreationTimestamp="2026-02-02 07:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:28:55.83143717 +0000 UTC m=+109.252640558" watchObservedRunningTime="2026-02-02 07:28:55.831941713 +0000 UTC m=+109.253145111"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.848225 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=84.848207518 podStartE2EDuration="1m24.848207518s" podCreationTimestamp="2026-02-02 07:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:28:55.848028923 +0000 UTC m=+109.269232311" watchObservedRunningTime="2026-02-02 07:28:55.848207518 +0000 UTC m=+109.269410876"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.849885 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81566aff-e608-4c06-a452-a0517fc7428d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rlb8k\" (UID: \"81566aff-e608-4c06-a452-a0517fc7428d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.881084 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-82v75" podStartSLOduration=88.881064165 podStartE2EDuration="1m28.881064165s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:28:55.88049694 +0000 UTC m=+109.301700298" watchObservedRunningTime="2026-02-02 07:28:55.881064165 +0000 UTC m=+109.302267533"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.927371 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.971706 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=61.971683979 podStartE2EDuration="1m1.971683979s" podCreationTimestamp="2026-02-02 07:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:28:55.956856222 +0000 UTC m=+109.378059590" watchObservedRunningTime="2026-02-02 07:28:55.971683979 +0000 UTC m=+109.392887337"
Feb 02 07:28:55 crc kubenswrapper[4730]: I0202 07:28:55.993132 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-z7nht" podStartSLOduration=88.993114948 podStartE2EDuration="1m28.993114948s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:28:55.991857105 +0000 UTC m=+109.413060463" watchObservedRunningTime="2026-02-02 07:28:55.993114948 +0000 UTC m=+109.414318316"
Feb 02 07:28:56 crc kubenswrapper[4730]: I0202 07:28:56.252481 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth"
Feb 02 07:28:56 crc kubenswrapper[4730]: E0202 07:28:56.252701 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc"
Feb 02 07:28:56 crc kubenswrapper[4730]: I0202 07:28:56.302328 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 14:36:13.938857387 +0000 UTC
Feb 02 07:28:56 crc kubenswrapper[4730]: I0202 07:28:56.302446 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 02 07:28:56 crc kubenswrapper[4730]: I0202 07:28:56.312238 4730 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 02 07:28:56 crc kubenswrapper[4730]: I0202 07:28:56.781748 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k" event={"ID":"81566aff-e608-4c06-a452-a0517fc7428d","Type":"ContainerStarted","Data":"2af2e475e00fc4ebf7b778c8d1a34397b183aa8ac0d621a427a0973c8a69026b"}
Feb 02 07:28:56 crc kubenswrapper[4730]: I0202 07:28:56.781812 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k" event={"ID":"81566aff-e608-4c06-a452-a0517fc7428d","Type":"ContainerStarted","Data":"291aa6f78b0bfed42f7ea92deea869584165a4ecb0e5d54c96af3541b82310c2"}
Feb 02 07:28:56 crc kubenswrapper[4730]: I0202 07:28:56.795690 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rlb8k" podStartSLOduration=89.795667093 podStartE2EDuration="1m29.795667093s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:28:56.795335214 +0000 UTC m=+110.216538562" watchObservedRunningTime="2026-02-02 07:28:56.795667093 +0000 UTC m=+110.216870471"
Feb 02 07:28:57 crc kubenswrapper[4730]: I0202 07:28:57.252277 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 07:28:57 crc kubenswrapper[4730]: I0202 07:28:57.252326 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 07:28:57 crc kubenswrapper[4730]: I0202 07:28:57.252447 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 07:28:57 crc kubenswrapper[4730]: E0202 07:28:57.254092 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 07:28:57 crc kubenswrapper[4730]: E0202 07:28:57.254262 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 07:28:57 crc kubenswrapper[4730]: E0202 07:28:57.254670 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 07:28:57 crc kubenswrapper[4730]: I0202 07:28:57.254962 4730 scope.go:117] "RemoveContainer" containerID="40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d"
Feb 02 07:28:57 crc kubenswrapper[4730]: E0202 07:28:57.255141 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54z89_openshift-ovn-kubernetes(ba7d1b84-4596-463a-bc77-c365c3c969b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0"
Feb 02 07:28:58 crc kubenswrapper[4730]: I0202 07:28:58.252065 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth"
Feb 02 07:28:58 crc kubenswrapper[4730]: E0202 07:28:58.252492 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc"
Feb 02 07:28:59 crc kubenswrapper[4730]: I0202 07:28:59.252818 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 07:28:59 crc kubenswrapper[4730]: I0202 07:28:59.252873 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 07:28:59 crc kubenswrapper[4730]: E0202 07:28:59.252975 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 07:28:59 crc kubenswrapper[4730]: I0202 07:28:59.253053 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 07:28:59 crc kubenswrapper[4730]: E0202 07:28:59.253233 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 07:28:59 crc kubenswrapper[4730]: E0202 07:28:59.253308 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 07:29:00 crc kubenswrapper[4730]: I0202 07:29:00.252975 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth"
Feb 02 07:29:00 crc kubenswrapper[4730]: E0202 07:29:00.253192 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc"
Feb 02 07:29:01 crc kubenswrapper[4730]: I0202 07:29:01.252479 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 07:29:01 crc kubenswrapper[4730]: E0202 07:29:01.252648 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 07:29:01 crc kubenswrapper[4730]: I0202 07:29:01.252504 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 07:29:01 crc kubenswrapper[4730]: I0202 07:29:01.252737 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 07:29:01 crc kubenswrapper[4730]: E0202 07:29:01.252803 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 07:29:01 crc kubenswrapper[4730]: E0202 07:29:01.252880 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:29:01 crc kubenswrapper[4730]: I0202 07:29:01.798387 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zp8tp_00b75ed7-302d-4f21-9c20-6ecab241b7b4/kube-multus/1.log" Feb 02 07:29:01 crc kubenswrapper[4730]: I0202 07:29:01.798876 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zp8tp_00b75ed7-302d-4f21-9c20-6ecab241b7b4/kube-multus/0.log" Feb 02 07:29:01 crc kubenswrapper[4730]: I0202 07:29:01.798950 4730 generic.go:334] "Generic (PLEG): container finished" podID="00b75ed7-302d-4f21-9c20-6ecab241b7b4" containerID="2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57" exitCode=1 Feb 02 07:29:01 crc kubenswrapper[4730]: I0202 07:29:01.798996 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zp8tp" event={"ID":"00b75ed7-302d-4f21-9c20-6ecab241b7b4","Type":"ContainerDied","Data":"2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57"} Feb 02 07:29:01 crc kubenswrapper[4730]: I0202 07:29:01.799042 4730 scope.go:117] "RemoveContainer" containerID="2aefc48cedabba6aa314ff6ff14bdcda55b0011db9ec402edf00fec470568996" Feb 02 07:29:01 crc kubenswrapper[4730]: I0202 07:29:01.799685 4730 scope.go:117] "RemoveContainer" containerID="2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57" Feb 02 07:29:01 crc kubenswrapper[4730]: E0202 07:29:01.799992 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-zp8tp_openshift-multus(00b75ed7-302d-4f21-9c20-6ecab241b7b4)\"" pod="openshift-multus/multus-zp8tp" podUID="00b75ed7-302d-4f21-9c20-6ecab241b7b4" Feb 02 07:29:02 crc kubenswrapper[4730]: I0202 07:29:02.252125 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:29:02 crc kubenswrapper[4730]: E0202 07:29:02.252276 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:29:02 crc kubenswrapper[4730]: I0202 07:29:02.805596 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zp8tp_00b75ed7-302d-4f21-9c20-6ecab241b7b4/kube-multus/1.log" Feb 02 07:29:03 crc kubenswrapper[4730]: I0202 07:29:03.252050 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:29:03 crc kubenswrapper[4730]: I0202 07:29:03.252107 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:29:03 crc kubenswrapper[4730]: E0202 07:29:03.252192 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:29:03 crc kubenswrapper[4730]: I0202 07:29:03.252054 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:29:03 crc kubenswrapper[4730]: E0202 07:29:03.252349 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:29:03 crc kubenswrapper[4730]: E0202 07:29:03.252568 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:29:04 crc kubenswrapper[4730]: I0202 07:29:04.252731 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:29:04 crc kubenswrapper[4730]: E0202 07:29:04.252870 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:29:05 crc kubenswrapper[4730]: I0202 07:29:05.252707 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:29:05 crc kubenswrapper[4730]: I0202 07:29:05.252786 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:29:05 crc kubenswrapper[4730]: I0202 07:29:05.252678 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:29:05 crc kubenswrapper[4730]: E0202 07:29:05.252869 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:29:05 crc kubenswrapper[4730]: E0202 07:29:05.253044 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:29:05 crc kubenswrapper[4730]: E0202 07:29:05.253326 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:29:06 crc kubenswrapper[4730]: I0202 07:29:06.252274 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:29:06 crc kubenswrapper[4730]: E0202 07:29:06.252422 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:29:07 crc kubenswrapper[4730]: I0202 07:29:07.252148 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:29:07 crc kubenswrapper[4730]: E0202 07:29:07.253254 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:29:07 crc kubenswrapper[4730]: I0202 07:29:07.253266 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:29:07 crc kubenswrapper[4730]: E0202 07:29:07.253373 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:29:07 crc kubenswrapper[4730]: I0202 07:29:07.253694 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:29:07 crc kubenswrapper[4730]: E0202 07:29:07.254052 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:29:07 crc kubenswrapper[4730]: E0202 07:29:07.273114 4730 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 02 07:29:07 crc kubenswrapper[4730]: E0202 07:29:07.373418 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 07:29:08 crc kubenswrapper[4730]: I0202 07:29:08.252067 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:29:08 crc kubenswrapper[4730]: E0202 07:29:08.252304 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:29:08 crc kubenswrapper[4730]: I0202 07:29:08.253483 4730 scope.go:117] "RemoveContainer" containerID="40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d" Feb 02 07:29:08 crc kubenswrapper[4730]: I0202 07:29:08.826386 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/3.log" Feb 02 07:29:08 crc kubenswrapper[4730]: I0202 07:29:08.829193 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerStarted","Data":"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67"} Feb 02 07:29:08 crc kubenswrapper[4730]: I0202 07:29:08.829566 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:29:09 crc kubenswrapper[4730]: I0202 07:29:09.171480 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podStartSLOduration=102.171443856 podStartE2EDuration="1m42.171443856s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:08.857005733 +0000 UTC m=+122.278209091" watchObservedRunningTime="2026-02-02 07:29:09.171443856 +0000 UTC m=+122.592647244" Feb 02 07:29:09 crc kubenswrapper[4730]: I0202 07:29:09.173494 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xrjth"] Feb 02 07:29:09 crc kubenswrapper[4730]: I0202 07:29:09.173646 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:29:09 crc kubenswrapper[4730]: E0202 07:29:09.173854 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:29:09 crc kubenswrapper[4730]: I0202 07:29:09.252634 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:29:09 crc kubenswrapper[4730]: I0202 07:29:09.252678 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:29:09 crc kubenswrapper[4730]: I0202 07:29:09.252697 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:29:09 crc kubenswrapper[4730]: E0202 07:29:09.252799 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:29:09 crc kubenswrapper[4730]: E0202 07:29:09.252869 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:29:09 crc kubenswrapper[4730]: E0202 07:29:09.252924 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:29:11 crc kubenswrapper[4730]: I0202 07:29:11.253099 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:29:11 crc kubenswrapper[4730]: I0202 07:29:11.253099 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:29:11 crc kubenswrapper[4730]: I0202 07:29:11.253276 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:29:11 crc kubenswrapper[4730]: I0202 07:29:11.253375 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:29:11 crc kubenswrapper[4730]: E0202 07:29:11.253684 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:29:11 crc kubenswrapper[4730]: E0202 07:29:11.254008 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:29:11 crc kubenswrapper[4730]: E0202 07:29:11.254323 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:29:11 crc kubenswrapper[4730]: E0202 07:29:11.254439 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:29:12 crc kubenswrapper[4730]: E0202 07:29:12.374808 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 07:29:13 crc kubenswrapper[4730]: I0202 07:29:13.252592 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:29:13 crc kubenswrapper[4730]: I0202 07:29:13.252633 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:29:13 crc kubenswrapper[4730]: I0202 07:29:13.252726 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:29:13 crc kubenswrapper[4730]: I0202 07:29:13.252900 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:29:13 crc kubenswrapper[4730]: E0202 07:29:13.253008 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:29:13 crc kubenswrapper[4730]: E0202 07:29:13.253073 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:29:13 crc kubenswrapper[4730]: E0202 07:29:13.253149 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:29:13 crc kubenswrapper[4730]: E0202 07:29:13.253337 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:29:13 crc kubenswrapper[4730]: I0202 07:29:13.253384 4730 scope.go:117] "RemoveContainer" containerID="2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57" Feb 02 07:29:13 crc kubenswrapper[4730]: I0202 07:29:13.849355 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zp8tp_00b75ed7-302d-4f21-9c20-6ecab241b7b4/kube-multus/1.log" Feb 02 07:29:13 crc kubenswrapper[4730]: I0202 07:29:13.849429 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zp8tp" event={"ID":"00b75ed7-302d-4f21-9c20-6ecab241b7b4","Type":"ContainerStarted","Data":"2a57047b25d7c894ae9847587e840769d7dfb9315cfd38751fa9926475985a74"} Feb 02 07:29:15 crc kubenswrapper[4730]: I0202 07:29:15.252021 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:29:15 crc kubenswrapper[4730]: E0202 07:29:15.252236 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 07:29:15 crc kubenswrapper[4730]: I0202 07:29:15.252302 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:29:15 crc kubenswrapper[4730]: I0202 07:29:15.252339 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:29:15 crc kubenswrapper[4730]: E0202 07:29:15.252480 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 07:29:15 crc kubenswrapper[4730]: E0202 07:29:15.252742 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:29:15 crc kubenswrapper[4730]: I0202 07:29:15.253275 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:29:15 crc kubenswrapper[4730]: E0202 07:29:15.253364 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 07:29:17 crc kubenswrapper[4730]: I0202 07:29:17.252410 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:29:17 crc kubenswrapper[4730]: I0202 07:29:17.252460 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:29:17 crc kubenswrapper[4730]: I0202 07:29:17.252425 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:29:17 crc kubenswrapper[4730]: I0202 07:29:17.252506 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:29:17 crc kubenswrapper[4730]: E0202 07:29:17.253439 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrjth" podUID="f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc" Feb 02 07:29:17 crc kubenswrapper[4730]: E0202 07:29:17.253567 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 07:29:17 crc kubenswrapper[4730]: E0202 07:29:17.253636 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 07:29:17 crc kubenswrapper[4730]: E0202 07:29:17.253742 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 07:29:18 crc kubenswrapper[4730]: I0202 07:29:18.415150 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-54z89"
Feb 02 07:29:19 crc kubenswrapper[4730]: I0202 07:29:19.252953 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 07:29:19 crc kubenswrapper[4730]: I0202 07:29:19.253013 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth"
Feb 02 07:29:19 crc kubenswrapper[4730]: I0202 07:29:19.253218 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 07:29:19 crc kubenswrapper[4730]: I0202 07:29:19.253230 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 07:29:19 crc kubenswrapper[4730]: I0202 07:29:19.256101 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 02 07:29:19 crc kubenswrapper[4730]: I0202 07:29:19.256558 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 02 07:29:19 crc kubenswrapper[4730]: I0202 07:29:19.256563 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 02 07:29:19 crc kubenswrapper[4730]: I0202 07:29:19.256759 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 02 07:29:19 crc kubenswrapper[4730]: I0202 07:29:19.257295 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 02 07:29:19 crc kubenswrapper[4730]: I0202 07:29:19.257429 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.260492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.304460 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4hw4w"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.305157 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.306890 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6mcz6"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.310204 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6mcz6"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.322288 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.323371 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.324309 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tmxn6"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.325080 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.325329 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.327244 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xtp9g"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.327661 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.339283 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.339464 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.339739 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.339876 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.339929 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.339952 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.340394 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.340489 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.340565 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.340585 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.340714 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.340770 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.340830 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.340888 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.340960 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.340982 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.341056 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.341082 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.341177 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.341215 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.341317 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.341342 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.342532 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.343044 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.343344 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.343629 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.344052 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.344673 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.346689 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-72nv7"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.347037 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.347502 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gnl7r"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.350511 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.350651 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.351518 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.351630 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.351723 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.351814 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.352806 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.353204 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.353320 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.353870 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.353993 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.354332 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.354358 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.354952 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-r8rf5"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.355488 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-r8rf5"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.355934 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.356274 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gnl7r"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.357049 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.382404 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.384601 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.385270 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.412975 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-tqvrx"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.413442 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m55dz"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.413958 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.414264 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tqvrx"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.415038 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-229kn"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.415358 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.415628 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-229kn"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.415646 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.415799 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.416023 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.420396 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.420663 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.420700 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.420830 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.421332 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.421541 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.421664 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.421846 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422103 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422185 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422253 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422348 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422458 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422557 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422571 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422591 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422636 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xmtgb"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422709 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422722 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422787 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422861 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422895 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422988 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423042 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423073 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423103 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423148 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423280 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422862 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423363 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423370 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423333 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423439 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423444 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423527 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423533 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423560 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.422989 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423607 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423637 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423687 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423702 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423717 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423731 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423776 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423792 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-468rj"]
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423815 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423826 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423938 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.423974 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.424074 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.424091 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.424286 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.424324 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.424462 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.424564 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.424586 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.424689 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.424704 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.424781 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.426032 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.426209 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.427464 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.428306 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-468rj"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.430996 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.431546 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.432778 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.440993 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.442438 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443015 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8e988b50-280e-49d0-b7d2-ae606685dc16-audit-dir\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443045 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443072 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443094 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7eca2f38-c23b-4874-b4a1-b57bafd24604-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tmxn6\" (UID: \"7eca2f38-c23b-4874-b4a1-b57bafd24604\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443122 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23885584-d53a-44da-879e-9d359c726f2c-config\") pod \"console-operator-58897d9998-6mcz6\" (UID: \"23885584-d53a-44da-879e-9d359c726f2c\") " pod="openshift-console-operator/console-operator-58897d9998-6mcz6"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443139 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443157 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-config\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443190 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-audit\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443209 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8e988b50-280e-49d0-b7d2-ae606685dc16-etcd-client\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443240 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443257 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29a89462-5b24-4924-a8a7-497b23f341e9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xrslf\" (UID: \"29a89462-5b24-4924-a8a7-497b23f341e9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443275 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d50625-677d-463d-9439-2d7fd88fb649-serving-cert\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443292 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lhgw\" (UniqueName: \"kubernetes.io/projected/d3052bac-69ea-478e-963c-3951dd878ac2-kube-api-access-7lhgw\") pod \"machine-approver-56656f9798-84q5s\" (UID: \"d3052bac-69ea-478e-963c-3951dd878ac2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443310 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db31bea7-e8e5-4390-8e72-fb8871151dd5-service-ca-bundle\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443327 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-image-import-ca\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443343 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-audit-dir\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443359 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bst2m\" (UniqueName: \"kubernetes.io/projected/29a89462-5b24-4924-a8a7-497b23f341e9-kube-api-access-bst2m\") pod \"openshift-controller-manager-operator-756b6f6bc6-xrslf\" (UID: \"29a89462-5b24-4924-a8a7-497b23f341e9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443380 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkxpm\" (UniqueName: \"kubernetes.io/projected/b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb-kube-api-access-dkxpm\") pod \"cluster-samples-operator-665b6dd947-d5mlp\" (UID: \"b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443418 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eca2f38-c23b-4874-b4a1-b57bafd24604-config\") pod \"machine-api-operator-5694c8668f-tmxn6\" (UID: \"7eca2f38-c23b-4874-b4a1-b57bafd24604\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443439 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-etcd-serving-ca\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443458 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3052bac-69ea-478e-963c-3951dd878ac2-auth-proxy-config\") pod \"machine-approver-56656f9798-84q5s\" (UID: \"d3052bac-69ea-478e-963c-3951dd878ac2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443473 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-config\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443489 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f07ac35-374b-4f55-af36-db35361500c4-client-ca\") pod \"route-controller-manager-6576b87f9c-djvb6\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443507 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x65jd\" (UniqueName: \"kubernetes.io/projected/9f07ac35-374b-4f55-af36-db35361500c4-kube-api-access-x65jd\") pod \"route-controller-manager-6576b87f9c-djvb6\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443526 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7eca2f38-c23b-4874-b4a1-b57bafd24604-images\") pod \"machine-api-operator-5694c8668f-tmxn6\" (UID: \"7eca2f38-c23b-4874-b4a1-b57bafd24604\") "
pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443544 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443561 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8e988b50-280e-49d0-b7d2-ae606685dc16-node-pullsecrets\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443580 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8t82\" (UniqueName: \"kubernetes.io/projected/23885584-d53a-44da-879e-9d359c726f2c-kube-api-access-h8t82\") pod \"console-operator-58897d9998-6mcz6\" (UID: \"23885584-d53a-44da-879e-9d359c726f2c\") " pod="openshift-console-operator/console-operator-58897d9998-6mcz6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443601 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db31bea7-e8e5-4390-8e72-fb8871151dd5-config\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443621 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443643 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d3052bac-69ea-478e-963c-3951dd878ac2-machine-approver-tls\") pod \"machine-approver-56656f9798-84q5s\" (UID: \"d3052bac-69ea-478e-963c-3951dd878ac2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443664 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3981d83c-cd31-4d75-b3b3-0b087c28a16c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jvns9\" (UID: \"3981d83c-cd31-4d75-b3b3-0b087c28a16c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443691 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmhtc\" (UniqueName: \"kubernetes.io/projected/db31bea7-e8e5-4390-8e72-fb8871151dd5-kube-api-access-xmhtc\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443708 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-router-certs\") 
pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443727 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443745 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgcch\" (UniqueName: \"kubernetes.io/projected/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-kube-api-access-lgcch\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443761 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jr2m\" (UniqueName: \"kubernetes.io/projected/01d50625-677d-463d-9439-2d7fd88fb649-kube-api-access-4jr2m\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443777 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c27212e-7271-4169-9aa7-8b2128167055-audit-dir\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443799 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs2gx\" (UniqueName: \"kubernetes.io/projected/7eca2f38-c23b-4874-b4a1-b57bafd24604-kube-api-access-hs2gx\") pod \"machine-api-operator-5694c8668f-tmxn6\" (UID: \"7eca2f38-c23b-4874-b4a1-b57bafd24604\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443815 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drw9\" (UniqueName: \"kubernetes.io/projected/3981d83c-cd31-4d75-b3b3-0b087c28a16c-kube-api-access-8drw9\") pod \"openshift-apiserver-operator-796bbdcf4f-jvns9\" (UID: \"3981d83c-cd31-4d75-b3b3-0b087c28a16c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443832 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443849 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwphf\" (UniqueName: \"kubernetes.io/projected/8e988b50-280e-49d0-b7d2-ae606685dc16-kube-api-access-rwphf\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443866 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443885 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-audit-policies\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443902 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-d5mlp\" (UID: \"b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443923 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e988b50-280e-49d0-b7d2-ae606685dc16-serving-cert\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443942 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-encryption-config\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc 
kubenswrapper[4730]: I0202 07:29:26.443972 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.443988 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f07ac35-374b-4f55-af36-db35361500c4-config\") pod \"route-controller-manager-6576b87f9c-djvb6\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444004 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db31bea7-e8e5-4390-8e72-fb8871151dd5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444020 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444037 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vb4p\" (UniqueName: 
\"kubernetes.io/projected/7c27212e-7271-4169-9aa7-8b2128167055-kube-api-access-2vb4p\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444055 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-client-ca\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444072 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3052bac-69ea-478e-963c-3951dd878ac2-config\") pod \"machine-approver-56656f9798-84q5s\" (UID: \"d3052bac-69ea-478e-963c-3951dd878ac2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444087 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23885584-d53a-44da-879e-9d359c726f2c-serving-cert\") pod \"console-operator-58897d9998-6mcz6\" (UID: \"23885584-d53a-44da-879e-9d359c726f2c\") " pod="openshift-console-operator/console-operator-58897d9998-6mcz6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444105 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-audit-policies\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 
07:29:26.444123 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444139 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-etcd-client\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444156 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-serving-cert\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444193 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f07ac35-374b-4f55-af36-db35361500c4-serving-cert\") pod \"route-controller-manager-6576b87f9c-djvb6\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444209 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444226 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444259 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkr42\" (UniqueName: \"kubernetes.io/projected/00a8df73-2822-496c-8b52-435531e7cbf7-kube-api-access-qkr42\") pod \"downloads-7954f5f757-r8rf5\" (UID: \"00a8df73-2822-496c-8b52-435531e7cbf7\") " pod="openshift-console/downloads-7954f5f757-r8rf5" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444275 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29a89462-5b24-4924-a8a7-497b23f341e9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xrslf\" (UID: \"29a89462-5b24-4924-a8a7-497b23f341e9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444291 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8e988b50-280e-49d0-b7d2-ae606685dc16-encryption-config\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " 
pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444307 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23885584-d53a-44da-879e-9d359c726f2c-trusted-ca\") pod \"console-operator-58897d9998-6mcz6\" (UID: \"23885584-d53a-44da-879e-9d359c726f2c\") " pod="openshift-console-operator/console-operator-58897d9998-6mcz6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444322 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db31bea7-e8e5-4390-8e72-fb8871151dd5-serving-cert\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.444339 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3981d83c-cd31-4d75-b3b3-0b087c28a16c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jvns9\" (UID: \"3981d83c-cd31-4d75-b3b3-0b087c28a16c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.445591 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wxm94"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.446276 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.446577 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk"] Feb 02 07:29:26 crc kubenswrapper[4730]: 
I0202 07:29:26.446668 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.446690 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wxm94" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.455634 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.465398 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.465702 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.468049 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.474397 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.475000 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.475127 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.475297 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.476962 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.477445 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r4mrs"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.478052 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.478153 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.478392 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4mrs" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.483426 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.485120 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pk5cc"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.485798 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.485973 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.486262 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pk5cc" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.488022 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6mcz6"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.488050 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.488605 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.491943 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.494833 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.496529 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.497282 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.505224 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.505939 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.506311 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.506972 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.508131 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.513676 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.515616 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.515757 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.516318 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.516522 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hs55q"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.517134 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.517833 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h5h45"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.518377 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-h5h45" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.518626 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.521665 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xtp9g"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.525372 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tmxn6"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.526493 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.529829 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2ls8j"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.530417 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.530433 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gnl7r"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.530512 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2ls8j" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.532531 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.534608 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.534633 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4hw4w"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.536537 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.536692 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m55dz"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.536718 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wxm94"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.539130 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-r8rf5"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.539269 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c"] Feb 02 07:29:26 crc 
kubenswrapper[4730]: I0202 07:29:26.541807 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.541849 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.542402 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.543845 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.544754 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r4mrs"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.545150 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-trusted-ca-bundle\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.545297 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5212b3b6-8f0a-47b3-9814-b093c275e32d-serving-cert\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.545393 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85e25636-c407-451f-8176-f15ca7097a97-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-555kr\" (UID: \"85e25636-c407-451f-8176-f15ca7097a97\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.545482 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e988b50-280e-49d0-b7d2-ae606685dc16-serving-cert\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.545570 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.545651 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmg5t\" (UniqueName: \"kubernetes.io/projected/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-kube-api-access-qmg5t\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " pod="openshift-ingress/router-default-5444994796-229kn" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.545724 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9203a95c-f4a4-449d-9f1a-d44338c975e7-srv-cert\") pod \"catalog-operator-68c6474976-b5dpt\" (UID: \"9203a95c-f4a4-449d-9f1a-d44338c975e7\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.545801 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vb4p\" (UniqueName: \"kubernetes.io/projected/7c27212e-7271-4169-9aa7-8b2128167055-kube-api-access-2vb4p\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.545884 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7dc234b-4559-460c-a4fe-85cedc72c368-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zjbtc\" (UID: \"f7dc234b-4559-460c-a4fe-85cedc72c368\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.545980 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/180061e1-4a0a-4a44-b6b0-5e38c20d4427-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m55dz\" (UID: \"180061e1-4a0a-4a44-b6b0-5e38c20d4427\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.546065 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bqzh\" (UniqueName: \"kubernetes.io/projected/9203a95c-f4a4-449d-9f1a-d44338c975e7-kube-api-access-4bqzh\") pod \"catalog-operator-68c6474976-b5dpt\" (UID: \"9203a95c-f4a4-449d-9f1a-d44338c975e7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 
07:29:26.546153 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3052bac-69ea-478e-963c-3951dd878ac2-config\") pod \"machine-approver-56656f9798-84q5s\" (UID: \"d3052bac-69ea-478e-963c-3951dd878ac2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.546281 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.546360 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f07ac35-374b-4f55-af36-db35361500c4-serving-cert\") pod \"route-controller-manager-6576b87f9c-djvb6\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.546443 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-etcd-client\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.546528 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-console-config\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " 
pod="openshift-console/console-f9d7485db-tqvrx" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.546607 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dcac256f-ab91-4849-b215-dcf74506d0d2-proxy-tls\") pod \"machine-config-controller-84d6567774-mgpdt\" (UID: \"dcac256f-ab91-4849-b215-dcf74506d0d2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.546696 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkr42\" (UniqueName: \"kubernetes.io/projected/00a8df73-2822-496c-8b52-435531e7cbf7-kube-api-access-qkr42\") pod \"downloads-7954f5f757-r8rf5\" (UID: \"00a8df73-2822-496c-8b52-435531e7cbf7\") " pod="openshift-console/downloads-7954f5f757-r8rf5" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.546773 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3981d83c-cd31-4d75-b3b3-0b087c28a16c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jvns9\" (UID: \"3981d83c-cd31-4d75-b3b3-0b087c28a16c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.546861 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db31bea7-e8e5-4390-8e72-fb8871151dd5-serving-cert\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.546938 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/8e988b50-280e-49d0-b7d2-ae606685dc16-audit-dir\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.547024 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eaa8ebca-b85f-4719-9b09-0f39ea039f24-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bmv4c\" (UID: \"eaa8ebca-b85f-4719-9b09-0f39ea039f24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.547102 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6m5v\" (UniqueName: \"kubernetes.io/projected/eaa8ebca-b85f-4719-9b09-0f39ea039f24-kube-api-access-b6m5v\") pod \"olm-operator-6b444d44fb-bmv4c\" (UID: \"eaa8ebca-b85f-4719-9b09-0f39ea039f24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.547196 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7eca2f38-c23b-4874-b4a1-b57bafd24604-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tmxn6\" (UID: \"7eca2f38-c23b-4874-b4a1-b57bafd24604\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.547292 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 
07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.547383 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-oauth-serving-cert\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.547485 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hkqt\" (UniqueName: \"kubernetes.io/projected/85e25636-c407-451f-8176-f15ca7097a97-kube-api-access-2hkqt\") pod \"package-server-manager-789f6589d5-555kr\" (UID: \"85e25636-c407-451f-8176-f15ca7097a97\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.547585 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-audit\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.547677 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8e988b50-280e-49d0-b7d2-ae606685dc16-etcd-client\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.547766 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5212b3b6-8f0a-47b3-9814-b093c275e32d-etcd-client\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-468rj" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.547861 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tqvrx"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.547893 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pk5cc"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.548128 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.549144 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-audit\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.549308 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.549770 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29a89462-5b24-4924-a8a7-497b23f341e9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xrslf\" (UID: \"29a89462-5b24-4924-a8a7-497b23f341e9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.552746 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llc55\" (UniqueName: \"kubernetes.io/projected/5212b3b6-8f0a-47b3-9814-b093c275e32d-kube-api-access-llc55\") pod \"etcd-operator-b45778765-468rj\" (UID: 
\"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.552770 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkxpm\" (UniqueName: \"kubernetes.io/projected/b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb-kube-api-access-dkxpm\") pod \"cluster-samples-operator-665b6dd947-d5mlp\" (UID: \"b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.552789 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-image-import-ca\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.552808 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-audit-dir\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.552825 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bst2m\" (UniqueName: \"kubernetes.io/projected/29a89462-5b24-4924-a8a7-497b23f341e9-kube-api-access-bst2m\") pod \"openshift-controller-manager-operator-756b6f6bc6-xrslf\" (UID: \"29a89462-5b24-4924-a8a7-497b23f341e9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.552841 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/b84477ff-bcf6-4967-9052-df8ffa8e0003-config\") pod \"kube-apiserver-operator-766d6c64bb-h5wzs\" (UID: \"b84477ff-bcf6-4967-9052-df8ffa8e0003\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.552854 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-etcd-client\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.550432 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29a89462-5b24-4924-a8a7-497b23f341e9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xrslf\" (UID: \"29a89462-5b24-4924-a8a7-497b23f341e9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.550947 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8e988b50-280e-49d0-b7d2-ae606685dc16-audit-dir\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.550999 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.553018 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xmtgb"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.551859 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7eca2f38-c23b-4874-b4a1-b57bafd24604-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tmxn6\" (UID: \"7eca2f38-c23b-4874-b4a1-b57bafd24604\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.552579 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.553604 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h5h45"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.551715 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db31bea7-e8e5-4390-8e72-fb8871151dd5-serving-cert\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.553704 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-image-import-ca\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.551389 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3052bac-69ea-478e-963c-3951dd878ac2-config\") pod 
\"machine-approver-56656f9798-84q5s\" (UID: \"d3052bac-69ea-478e-963c-3951dd878ac2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.552858 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-config\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.554007 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5212b3b6-8f0a-47b3-9814-b093c275e32d-etcd-service-ca\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.554085 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpm6w\" (UniqueName: \"kubernetes.io/projected/180061e1-4a0a-4a44-b6b0-5e38c20d4427-kube-api-access-cpm6w\") pod \"openshift-config-operator-7777fb866f-m55dz\" (UID: \"180061e1-4a0a-4a44-b6b0-5e38c20d4427\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.554216 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x65jd\" (UniqueName: \"kubernetes.io/projected/9f07ac35-374b-4f55-af36-db35361500c4-kube-api-access-x65jd\") pod \"route-controller-manager-6576b87f9c-djvb6\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.554313 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ca56223c-bd37-4732-90ed-5b714bf35831-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-trn9r\" (UID: \"ca56223c-bd37-4732-90ed-5b714bf35831\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.554421 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.554541 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-service-ca-bundle\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " pod="openshift-ingress/router-default-5444994796-229kn" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.554645 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d3052bac-69ea-478e-963c-3951dd878ac2-machine-approver-tls\") pod \"machine-approver-56656f9798-84q5s\" (UID: \"d3052bac-69ea-478e-963c-3951dd878ac2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.554756 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49b610d0-f480-4bb9-80eb-919d3301ffd4-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-nncxk\" (UID: \"49b610d0-f480-4bb9-80eb-919d3301ffd4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.555284 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3981d83c-cd31-4d75-b3b3-0b087c28a16c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jvns9\" (UID: \"3981d83c-cd31-4d75-b3b3-0b087c28a16c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.555385 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.553710 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-audit-dir\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.554905 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-72nv7"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.555243 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.553944 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-config\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.554863 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f07ac35-374b-4f55-af36-db35361500c4-serving-cert\") pod \"route-controller-manager-6576b87f9c-djvb6\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.555695 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmhtc\" (UniqueName: \"kubernetes.io/projected/db31bea7-e8e5-4390-8e72-fb8871151dd5-kube-api-access-xmhtc\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.555802 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.555896 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.555987 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnnrv\" (UniqueName: \"kubernetes.io/projected/4741fab0-5de7-4ba2-af2a-ca79c0de10d6-kube-api-access-nnnrv\") pod \"multus-admission-controller-857f4d67dd-pk5cc\" (UID: \"4741fab0-5de7-4ba2-af2a-ca79c0de10d6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pk5cc" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.555826 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2r4hv"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.556364 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49b610d0-f480-4bb9-80eb-919d3301ffd4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nncxk\" (UID: \"49b610d0-f480-4bb9-80eb-919d3301ffd4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.556509 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95763bc4-bfd7-4afe-8a38-22770288a195-apiservice-cert\") pod \"packageserver-d55dfcdfc-jg5m8\" (UID: \"95763bc4-bfd7-4afe-8a38-22770288a195\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.556597 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwphf\" (UniqueName: 
\"kubernetes.io/projected/8e988b50-280e-49d0-b7d2-ae606685dc16-kube-api-access-rwphf\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.556694 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.556779 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5212b3b6-8f0a-47b3-9814-b093c275e32d-config\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.556868 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca56223c-bd37-4732-90ed-5b714bf35831-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-trn9r\" (UID: \"ca56223c-bd37-4732-90ed-5b714bf35831\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.556946 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.557099 
4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da86c860-a495-4d5f-8084-32c64a497e52-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nvzsg\" (UID: \"da86c860-a495-4d5f-8084-32c64a497e52\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.557358 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fc6091c-8d00-44c2-91ce-3f7b568bf355-config\") pod \"kube-controller-manager-operator-78b949d7b-zhwpn\" (UID: \"4fc6091c-8d00-44c2-91ce-3f7b568bf355\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.557465 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-audit-policies\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.557545 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/49b610d0-f480-4bb9-80eb-919d3301ffd4-metrics-tls\") pod \"ingress-operator-5b745b69d9-nncxk\" (UID: \"49b610d0-f480-4bb9-80eb-919d3301ffd4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.557649 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-d5mlp\" (UID: \"b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.557753 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-encryption-config\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558311 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f07ac35-374b-4f55-af36-db35361500c4-config\") pod \"route-controller-manager-6576b87f9c-djvb6\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558429 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-audit-policies\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.557048 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-wbkch"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.556812 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-2r4hv" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.557652 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.557228 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.555717 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8e988b50-280e-49d0-b7d2-ae606685dc16-etcd-client\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.557597 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d3052bac-69ea-478e-963c-3951dd878ac2-machine-approver-tls\") pod \"machine-approver-56656f9798-84q5s\" (UID: \"d3052bac-69ea-478e-963c-3951dd878ac2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558370 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558431 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db31bea7-e8e5-4390-8e72-fb8871151dd5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558730 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558752 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b84477ff-bcf6-4967-9052-df8ffa8e0003-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h5wzs\" (UID: \"b84477ff-bcf6-4967-9052-df8ffa8e0003\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558770 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-client-ca\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558793 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qc66\" (UniqueName: \"kubernetes.io/projected/ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b-kube-api-access-8qc66\") pod \"dns-operator-744455d44c-wxm94\" (UID: \"ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b\") " pod="openshift-dns-operator/dns-operator-744455d44c-wxm94" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558809 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bztq\" (UniqueName: \"kubernetes.io/projected/95763bc4-bfd7-4afe-8a38-22770288a195-kube-api-access-4bztq\") pod \"packageserver-d55dfcdfc-jg5m8\" (UID: \"95763bc4-bfd7-4afe-8a38-22770288a195\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558831 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-audit-policies\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558849 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23885584-d53a-44da-879e-9d359c726f2c-serving-cert\") pod \"console-operator-58897d9998-6mcz6\" (UID: \"23885584-d53a-44da-879e-9d359c726f2c\") " pod="openshift-console-operator/console-operator-58897d9998-6mcz6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558868 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-stats-auth\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " 
pod="openshift-ingress/router-default-5444994796-229kn" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558886 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fc6091c-8d00-44c2-91ce-3f7b568bf355-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zhwpn\" (UID: \"4fc6091c-8d00-44c2-91ce-3f7b568bf355\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558903 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9203a95c-f4a4-449d-9f1a-d44338c975e7-profile-collector-cert\") pod \"catalog-operator-68c6474976-b5dpt\" (UID: \"9203a95c-f4a4-449d-9f1a-d44338c975e7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558938 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-metrics-certs\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " pod="openshift-ingress/router-default-5444994796-229kn" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558954 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da86c860-a495-4d5f-8084-32c64a497e52-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nvzsg\" (UID: \"da86c860-a495-4d5f-8084-32c64a497e52\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558976 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da86c860-a495-4d5f-8084-32c64a497e52-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nvzsg\" (UID: \"da86c860-a495-4d5f-8084-32c64a497e52\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.558993 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-serving-cert\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.559024 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.559042 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.559059 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29a89462-5b24-4924-a8a7-497b23f341e9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xrslf\" (UID: 
\"29a89462-5b24-4924-a8a7-497b23f341e9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.559078 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8e988b50-280e-49d0-b7d2-ae606685dc16-encryption-config\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.559088 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f07ac35-374b-4f55-af36-db35361500c4-config\") pod \"route-controller-manager-6576b87f9c-djvb6\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.559094 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23885584-d53a-44da-879e-9d359c726f2c-trusted-ca\") pod \"console-operator-58897d9998-6mcz6\" (UID: \"23885584-d53a-44da-879e-9d359c726f2c\") " pod="openshift-console-operator/console-operator-58897d9998-6mcz6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.555811 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3981d83c-cd31-4d75-b3b3-0b087c28a16c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jvns9\" (UID: \"3981d83c-cd31-4d75-b3b3-0b087c28a16c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.564770 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.565060 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-encryption-config\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.565536 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.565548 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-d5mlp\" (UID: \"b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.565703 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23885584-d53a-44da-879e-9d359c726f2c-trusted-ca\") pod \"console-operator-58897d9998-6mcz6\" (UID: \"23885584-d53a-44da-879e-9d359c726f2c\") " pod="openshift-console-operator/console-operator-58897d9998-6mcz6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.559137 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdncf\" (UniqueName: \"kubernetes.io/projected/dcac256f-ab91-4849-b215-dcf74506d0d2-kube-api-access-hdncf\") pod \"machine-config-controller-84d6567774-mgpdt\" (UID: \"dcac256f-ab91-4849-b215-dcf74506d0d2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.565754 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3981d83c-cd31-4d75-b3b3-0b087c28a16c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jvns9\" (UID: \"3981d83c-cd31-4d75-b3b3-0b087c28a16c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.565779 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.565813 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23885584-d53a-44da-879e-9d359c726f2c-config\") pod \"console-operator-58897d9998-6mcz6\" (UID: \"23885584-d53a-44da-879e-9d359c726f2c\") " pod="openshift-console-operator/console-operator-58897d9998-6mcz6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.566095 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db31bea7-e8e5-4390-8e72-fb8871151dd5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.567719 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.567767 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-audit-policies\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.567774 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-client-ca\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.567902 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-default-certificate\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " pod="openshift-ingress/router-default-5444994796-229kn" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.568631 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/180061e1-4a0a-4a44-b6b0-5e38c20d4427-serving-cert\") pod \"openshift-config-operator-7777fb866f-m55dz\" (UID: \"180061e1-4a0a-4a44-b6b0-5e38c20d4427\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.568771 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.568924 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-config\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.569042 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.569130 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb8gr\" (UniqueName: \"kubernetes.io/projected/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-kube-api-access-sb8gr\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.569286 
4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b84477ff-bcf6-4967-9052-df8ffa8e0003-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h5wzs\" (UID: \"b84477ff-bcf6-4967-9052-df8ffa8e0003\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.569390 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d50625-677d-463d-9439-2d7fd88fb649-serving-cert\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.569477 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lhgw\" (UniqueName: \"kubernetes.io/projected/d3052bac-69ea-478e-963c-3951dd878ac2-kube-api-access-7lhgw\") pod \"machine-approver-56656f9798-84q5s\" (UID: \"d3052bac-69ea-478e-963c-3951dd878ac2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.569576 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db31bea7-e8e5-4390-8e72-fb8871151dd5-service-ca-bundle\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.569661 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eca2f38-c23b-4874-b4a1-b57bafd24604-config\") pod \"machine-api-operator-5694c8668f-tmxn6\" (UID: 
\"7eca2f38-c23b-4874-b4a1-b57bafd24604\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.569752 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-etcd-serving-ca\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.569849 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95763bc4-bfd7-4afe-8a38-22770288a195-webhook-cert\") pod \"packageserver-d55dfcdfc-jg5m8\" (UID: \"95763bc4-bfd7-4afe-8a38-22770288a195\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.569938 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.569941 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3052bac-69ea-478e-963c-3951dd878ac2-auth-proxy-config\") pod \"machine-approver-56656f9798-84q5s\" (UID: \"d3052bac-69ea-478e-963c-3951dd878ac2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570028 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/9f07ac35-374b-4f55-af36-db35361500c4-client-ca\") pod \"route-controller-manager-6576b87f9c-djvb6\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570050 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4741fab0-5de7-4ba2-af2a-ca79c0de10d6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pk5cc\" (UID: \"4741fab0-5de7-4ba2-af2a-ca79c0de10d6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pk5cc" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570097 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-console-oauth-config\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570126 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7eca2f38-c23b-4874-b4a1-b57bafd24604-images\") pod \"machine-api-operator-5694c8668f-tmxn6\" (UID: \"7eca2f38-c23b-4874-b4a1-b57bafd24604\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570186 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8e988b50-280e-49d0-b7d2-ae606685dc16-node-pullsecrets\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570209 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570233 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ccmd\" (UniqueName: \"kubernetes.io/projected/f7dc234b-4559-460c-a4fe-85cedc72c368-kube-api-access-5ccmd\") pod \"control-plane-machine-set-operator-78cbb6b69f-zjbtc\" (UID: \"f7dc234b-4559-460c-a4fe-85cedc72c368\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570262 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8t82\" (UniqueName: \"kubernetes.io/projected/23885584-d53a-44da-879e-9d359c726f2c-kube-api-access-h8t82\") pod \"console-operator-58897d9998-6mcz6\" (UID: \"23885584-d53a-44da-879e-9d359c726f2c\") " pod="openshift-console-operator/console-operator-58897d9998-6mcz6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570285 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db31bea7-e8e5-4390-8e72-fb8871151dd5-config\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570316 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fc6091c-8d00-44c2-91ce-3f7b568bf355-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-zhwpn\" (UID: \"4fc6091c-8d00-44c2-91ce-3f7b568bf355\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570338 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b-metrics-tls\") pod \"dns-operator-744455d44c-wxm94\" (UID: \"ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b\") " pod="openshift-dns-operator/dns-operator-744455d44c-wxm94" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570362 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ca56223c-bd37-4732-90ed-5b714bf35831-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-trn9r\" (UID: \"ca56223c-bd37-4732-90ed-5b714bf35831\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570386 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dcac256f-ab91-4849-b215-dcf74506d0d2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mgpdt\" (UID: \"dcac256f-ab91-4849-b215-dcf74506d0d2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570419 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jr2m\" (UniqueName: \"kubernetes.io/projected/01d50625-677d-463d-9439-2d7fd88fb649-kube-api-access-4jr2m\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570438 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgcch\" (UniqueName: \"kubernetes.io/projected/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-kube-api-access-lgcch\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570460 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdr6w\" (UniqueName: \"kubernetes.io/projected/49b610d0-f480-4bb9-80eb-919d3301ffd4-kube-api-access-rdr6w\") pod \"ingress-operator-5b745b69d9-nncxk\" (UID: \"49b610d0-f480-4bb9-80eb-919d3301ffd4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570481 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs2gx\" (UniqueName: \"kubernetes.io/projected/7eca2f38-c23b-4874-b4a1-b57bafd24604-kube-api-access-hs2gx\") pod \"machine-api-operator-5694c8668f-tmxn6\" (UID: \"7eca2f38-c23b-4874-b4a1-b57bafd24604\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570502 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8drw9\" (UniqueName: \"kubernetes.io/projected/3981d83c-cd31-4d75-b3b3-0b087c28a16c-kube-api-access-8drw9\") pod \"openshift-apiserver-operator-796bbdcf4f-jvns9\" (UID: \"3981d83c-cd31-4d75-b3b3-0b087c28a16c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570519 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/7c27212e-7271-4169-9aa7-8b2128167055-audit-dir\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570544 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-console-serving-cert\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570563 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-service-ca\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570583 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eaa8ebca-b85f-4719-9b09-0f39ea039f24-srv-cert\") pod \"olm-operator-6b444d44fb-bmv4c\" (UID: \"eaa8ebca-b85f-4719-9b09-0f39ea039f24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570608 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5212b3b6-8f0a-47b3-9814-b093c275e32d-etcd-ca\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570628 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/95763bc4-bfd7-4afe-8a38-22770288a195-tmpfs\") pod \"packageserver-d55dfcdfc-jg5m8\" (UID: \"95763bc4-bfd7-4afe-8a38-22770288a195\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.570648 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnbss\" (UniqueName: \"kubernetes.io/projected/ca56223c-bd37-4732-90ed-5b714bf35831-kube-api-access-bnbss\") pod \"cluster-image-registry-operator-dc59b4c8b-trn9r\" (UID: \"ca56223c-bd37-4732-90ed-5b714bf35831\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.571100 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.571303 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f07ac35-374b-4f55-af36-db35361500c4-client-ca\") pod \"route-controller-manager-6576b87f9c-djvb6\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.571587 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-serving-cert\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.568301 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29a89462-5b24-4924-a8a7-497b23f341e9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xrslf\" (UID: \"29a89462-5b24-4924-a8a7-497b23f341e9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.571800 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8e988b50-280e-49d0-b7d2-ae606685dc16-node-pullsecrets\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.572191 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7eca2f38-c23b-4874-b4a1-b57bafd24604-images\") pod \"machine-api-operator-5694c8668f-tmxn6\" (UID: \"7eca2f38-c23b-4874-b4a1-b57bafd24604\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.572514 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.568379 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.572779 4730 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-config\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.573340 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23885584-d53a-44da-879e-9d359c726f2c-config\") pod \"console-operator-58897d9998-6mcz6\" (UID: \"23885584-d53a-44da-879e-9d359c726f2c\") " pod="openshift-console-operator/console-operator-58897d9998-6mcz6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.573942 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c27212e-7271-4169-9aa7-8b2128167055-audit-dir\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.574034 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db31bea7-e8e5-4390-8e72-fb8871151dd5-service-ca-bundle\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.574299 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3052bac-69ea-478e-963c-3951dd878ac2-auth-proxy-config\") pod \"machine-approver-56656f9798-84q5s\" (UID: \"d3052bac-69ea-478e-963c-3951dd878ac2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.574427 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8e988b50-280e-49d0-b7d2-ae606685dc16-etcd-serving-ca\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.576321 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eca2f38-c23b-4874-b4a1-b57bafd24604-config\") pod \"machine-api-operator-5694c8668f-tmxn6\" (UID: \"7eca2f38-c23b-4874-b4a1-b57bafd24604\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.576296 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db31bea7-e8e5-4390-8e72-fb8871151dd5-config\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.576664 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.576899 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: 
I0202 07:29:26.577812 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.578376 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e988b50-280e-49d0-b7d2-ae606685dc16-serving-cert\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.578432 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8e988b50-280e-49d0-b7d2-ae606685dc16-encryption-config\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.578995 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.579444 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wbkch"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.579541 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.579622 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 
07:29:26.579702 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-468rj"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.579916 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.579406 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wbkch" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.580120 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d50625-677d-463d-9439-2d7fd88fb649-serving-cert\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.582014 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.582051 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23885584-d53a-44da-879e-9d359c726f2c-serving-cert\") pod \"console-operator-58897d9998-6mcz6\" (UID: \"23885584-d53a-44da-879e-9d359c726f2c\") " pod="openshift-console-operator/console-operator-58897d9998-6mcz6" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.583281 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.584530 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2r4hv"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.585862 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-hs55q"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.587313 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.587866 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.591748 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.593864 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x6l8n"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.595427 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.595683 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x6l8n"] Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.607760 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.627040 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.667154 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671363 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b84477ff-bcf6-4967-9052-df8ffa8e0003-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-h5wzs\" (UID: \"b84477ff-bcf6-4967-9052-df8ffa8e0003\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671407 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qc66\" (UniqueName: \"kubernetes.io/projected/ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b-kube-api-access-8qc66\") pod \"dns-operator-744455d44c-wxm94\" (UID: \"ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b\") " pod="openshift-dns-operator/dns-operator-744455d44c-wxm94" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671433 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bztq\" (UniqueName: \"kubernetes.io/projected/95763bc4-bfd7-4afe-8a38-22770288a195-kube-api-access-4bztq\") pod \"packageserver-d55dfcdfc-jg5m8\" (UID: \"95763bc4-bfd7-4afe-8a38-22770288a195\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671458 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-stats-auth\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " pod="openshift-ingress/router-default-5444994796-229kn" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671484 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fc6091c-8d00-44c2-91ce-3f7b568bf355-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zhwpn\" (UID: \"4fc6091c-8d00-44c2-91ce-3f7b568bf355\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671507 4730 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9203a95c-f4a4-449d-9f1a-d44338c975e7-profile-collector-cert\") pod \"catalog-operator-68c6474976-b5dpt\" (UID: \"9203a95c-f4a4-449d-9f1a-d44338c975e7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671529 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-metrics-certs\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " pod="openshift-ingress/router-default-5444994796-229kn" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671551 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da86c860-a495-4d5f-8084-32c64a497e52-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nvzsg\" (UID: \"da86c860-a495-4d5f-8084-32c64a497e52\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671573 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da86c860-a495-4d5f-8084-32c64a497e52-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nvzsg\" (UID: \"da86c860-a495-4d5f-8084-32c64a497e52\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671609 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdncf\" (UniqueName: \"kubernetes.io/projected/dcac256f-ab91-4849-b215-dcf74506d0d2-kube-api-access-hdncf\") pod \"machine-config-controller-84d6567774-mgpdt\" (UID: \"dcac256f-ab91-4849-b215-dcf74506d0d2\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671632 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-default-certificate\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " pod="openshift-ingress/router-default-5444994796-229kn" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671657 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/180061e1-4a0a-4a44-b6b0-5e38c20d4427-serving-cert\") pod \"openshift-config-operator-7777fb866f-m55dz\" (UID: \"180061e1-4a0a-4a44-b6b0-5e38c20d4427\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671684 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b84477ff-bcf6-4967-9052-df8ffa8e0003-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h5wzs\" (UID: \"b84477ff-bcf6-4967-9052-df8ffa8e0003\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671711 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb8gr\" (UniqueName: \"kubernetes.io/projected/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-kube-api-access-sb8gr\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671751 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95763bc4-bfd7-4afe-8a38-22770288a195-webhook-cert\") 
pod \"packageserver-d55dfcdfc-jg5m8\" (UID: \"95763bc4-bfd7-4afe-8a38-22770288a195\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671779 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4741fab0-5de7-4ba2-af2a-ca79c0de10d6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pk5cc\" (UID: \"4741fab0-5de7-4ba2-af2a-ca79c0de10d6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pk5cc" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671803 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-console-oauth-config\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671828 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ccmd\" (UniqueName: \"kubernetes.io/projected/f7dc234b-4559-460c-a4fe-85cedc72c368-kube-api-access-5ccmd\") pod \"control-plane-machine-set-operator-78cbb6b69f-zjbtc\" (UID: \"f7dc234b-4559-460c-a4fe-85cedc72c368\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671860 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fc6091c-8d00-44c2-91ce-3f7b568bf355-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zhwpn\" (UID: \"4fc6091c-8d00-44c2-91ce-3f7b568bf355\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671882 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b-metrics-tls\") pod \"dns-operator-744455d44c-wxm94\" (UID: \"ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b\") " pod="openshift-dns-operator/dns-operator-744455d44c-wxm94" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671906 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ca56223c-bd37-4732-90ed-5b714bf35831-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-trn9r\" (UID: \"ca56223c-bd37-4732-90ed-5b714bf35831\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671931 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dcac256f-ab91-4849-b215-dcf74506d0d2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mgpdt\" (UID: \"dcac256f-ab91-4849-b215-dcf74506d0d2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.671970 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdr6w\" (UniqueName: \"kubernetes.io/projected/49b610d0-f480-4bb9-80eb-919d3301ffd4-kube-api-access-rdr6w\") pod \"ingress-operator-5b745b69d9-nncxk\" (UID: \"49b610d0-f480-4bb9-80eb-919d3301ffd4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk" Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672004 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-console-serving-cert\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " 
pod="openshift-console/console-f9d7485db-tqvrx"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672025 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-service-ca\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672045 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eaa8ebca-b85f-4719-9b09-0f39ea039f24-srv-cert\") pod \"olm-operator-6b444d44fb-bmv4c\" (UID: \"eaa8ebca-b85f-4719-9b09-0f39ea039f24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672067 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5212b3b6-8f0a-47b3-9814-b093c275e32d-etcd-ca\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672264 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/95763bc4-bfd7-4afe-8a38-22770288a195-tmpfs\") pod \"packageserver-d55dfcdfc-jg5m8\" (UID: \"95763bc4-bfd7-4afe-8a38-22770288a195\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672288 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnbss\" (UniqueName: \"kubernetes.io/projected/ca56223c-bd37-4732-90ed-5b714bf35831-kube-api-access-bnbss\") pod \"cluster-image-registry-operator-dc59b4c8b-trn9r\" (UID: \"ca56223c-bd37-4732-90ed-5b714bf35831\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672310 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-trusted-ca-bundle\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672330 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5212b3b6-8f0a-47b3-9814-b093c275e32d-serving-cert\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672352 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85e25636-c407-451f-8176-f15ca7097a97-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-555kr\" (UID: \"85e25636-c407-451f-8176-f15ca7097a97\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672380 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmg5t\" (UniqueName: \"kubernetes.io/projected/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-kube-api-access-qmg5t\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " pod="openshift-ingress/router-default-5444994796-229kn"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672402 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9203a95c-f4a4-449d-9f1a-d44338c975e7-srv-cert\") pod \"catalog-operator-68c6474976-b5dpt\" (UID: \"9203a95c-f4a4-449d-9f1a-d44338c975e7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672432 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7dc234b-4559-460c-a4fe-85cedc72c368-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zjbtc\" (UID: \"f7dc234b-4559-460c-a4fe-85cedc72c368\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672454 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/180061e1-4a0a-4a44-b6b0-5e38c20d4427-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m55dz\" (UID: \"180061e1-4a0a-4a44-b6b0-5e38c20d4427\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672475 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bqzh\" (UniqueName: \"kubernetes.io/projected/9203a95c-f4a4-449d-9f1a-d44338c975e7-kube-api-access-4bqzh\") pod \"catalog-operator-68c6474976-b5dpt\" (UID: \"9203a95c-f4a4-449d-9f1a-d44338c975e7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672500 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-console-config\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672525 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dcac256f-ab91-4849-b215-dcf74506d0d2-proxy-tls\") pod \"machine-config-controller-84d6567774-mgpdt\" (UID: \"dcac256f-ab91-4849-b215-dcf74506d0d2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672564 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eaa8ebca-b85f-4719-9b09-0f39ea039f24-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bmv4c\" (UID: \"eaa8ebca-b85f-4719-9b09-0f39ea039f24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672585 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6m5v\" (UniqueName: \"kubernetes.io/projected/eaa8ebca-b85f-4719-9b09-0f39ea039f24-kube-api-access-b6m5v\") pod \"olm-operator-6b444d44fb-bmv4c\" (UID: \"eaa8ebca-b85f-4719-9b09-0f39ea039f24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672616 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-oauth-serving-cert\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672638 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hkqt\" (UniqueName: \"kubernetes.io/projected/85e25636-c407-451f-8176-f15ca7097a97-kube-api-access-2hkqt\") pod \"package-server-manager-789f6589d5-555kr\" (UID: \"85e25636-c407-451f-8176-f15ca7097a97\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672787 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5212b3b6-8f0a-47b3-9814-b093c275e32d-etcd-client\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672813 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llc55\" (UniqueName: \"kubernetes.io/projected/5212b3b6-8f0a-47b3-9814-b093c275e32d-kube-api-access-llc55\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672849 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b84477ff-bcf6-4967-9052-df8ffa8e0003-config\") pod \"kube-apiserver-operator-766d6c64bb-h5wzs\" (UID: \"b84477ff-bcf6-4967-9052-df8ffa8e0003\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672871 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5212b3b6-8f0a-47b3-9814-b093c275e32d-etcd-service-ca\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672892 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpm6w\" (UniqueName: \"kubernetes.io/projected/180061e1-4a0a-4a44-b6b0-5e38c20d4427-kube-api-access-cpm6w\") pod \"openshift-config-operator-7777fb866f-m55dz\" (UID: \"180061e1-4a0a-4a44-b6b0-5e38c20d4427\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672927 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ca56223c-bd37-4732-90ed-5b714bf35831-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-trn9r\" (UID: \"ca56223c-bd37-4732-90ed-5b714bf35831\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672955 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-service-ca-bundle\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " pod="openshift-ingress/router-default-5444994796-229kn"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.672980 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49b610d0-f480-4bb9-80eb-919d3301ffd4-trusted-ca\") pod \"ingress-operator-5b745b69d9-nncxk\" (UID: \"49b610d0-f480-4bb9-80eb-919d3301ffd4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.673028 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnnrv\" (UniqueName: \"kubernetes.io/projected/4741fab0-5de7-4ba2-af2a-ca79c0de10d6-kube-api-access-nnnrv\") pod \"multus-admission-controller-857f4d67dd-pk5cc\" (UID: \"4741fab0-5de7-4ba2-af2a-ca79c0de10d6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pk5cc"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.673063 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49b610d0-f480-4bb9-80eb-919d3301ffd4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nncxk\" (UID: \"49b610d0-f480-4bb9-80eb-919d3301ffd4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.673087 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95763bc4-bfd7-4afe-8a38-22770288a195-apiservice-cert\") pod \"packageserver-d55dfcdfc-jg5m8\" (UID: \"95763bc4-bfd7-4afe-8a38-22770288a195\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.673111 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca56223c-bd37-4732-90ed-5b714bf35831-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-trn9r\" (UID: \"ca56223c-bd37-4732-90ed-5b714bf35831\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.673145 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5212b3b6-8f0a-47b3-9814-b093c275e32d-config\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.673186 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da86c860-a495-4d5f-8084-32c64a497e52-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nvzsg\" (UID: \"da86c860-a495-4d5f-8084-32c64a497e52\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.673207 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fc6091c-8d00-44c2-91ce-3f7b568bf355-config\") pod \"kube-controller-manager-operator-78b949d7b-zhwpn\" (UID: \"4fc6091c-8d00-44c2-91ce-3f7b568bf355\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.673230 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/49b610d0-f480-4bb9-80eb-919d3301ffd4-metrics-tls\") pod \"ingress-operator-5b745b69d9-nncxk\" (UID: \"49b610d0-f480-4bb9-80eb-919d3301ffd4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.675453 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-stats-auth\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " pod="openshift-ingress/router-default-5444994796-229kn"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.675465 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-console-config\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.676505 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/95763bc4-bfd7-4afe-8a38-22770288a195-tmpfs\") pod \"packageserver-d55dfcdfc-jg5m8\" (UID: \"95763bc4-bfd7-4afe-8a38-22770288a195\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.676655 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-oauth-serving-cert\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.677098 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-default-certificate\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " pod="openshift-ingress/router-default-5444994796-229kn"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.677895 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-service-ca-bundle\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " pod="openshift-ingress/router-default-5444994796-229kn"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.678101 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca56223c-bd37-4732-90ed-5b714bf35831-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-trn9r\" (UID: \"ca56223c-bd37-4732-90ed-5b714bf35831\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.678397 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-trusted-ca-bundle\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.678588 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/180061e1-4a0a-4a44-b6b0-5e38c20d4427-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m55dz\" (UID: \"180061e1-4a0a-4a44-b6b0-5e38c20d4427\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.679451 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ca56223c-bd37-4732-90ed-5b714bf35831-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-trn9r\" (UID: \"ca56223c-bd37-4732-90ed-5b714bf35831\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.679654 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-service-ca\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.680056 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dcac256f-ab91-4849-b215-dcf74506d0d2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mgpdt\" (UID: \"dcac256f-ab91-4849-b215-dcf74506d0d2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.680666 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/180061e1-4a0a-4a44-b6b0-5e38c20d4427-serving-cert\") pod \"openshift-config-operator-7777fb866f-m55dz\" (UID: \"180061e1-4a0a-4a44-b6b0-5e38c20d4427\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.681686 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-console-serving-cert\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.684601 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-console-oauth-config\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.687936 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.689768 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-metrics-certs\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " pod="openshift-ingress/router-default-5444994796-229kn"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.707425 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.711210 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5212b3b6-8f0a-47b3-9814-b093c275e32d-serving-cert\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.727009 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.730421 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5212b3b6-8f0a-47b3-9814-b093c275e32d-etcd-client\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.747080 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.747849 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5212b3b6-8f0a-47b3-9814-b093c275e32d-config\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.767361 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.775571 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5212b3b6-8f0a-47b3-9814-b093c275e32d-etcd-ca\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.787638 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.797543 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5212b3b6-8f0a-47b3-9814-b093c275e32d-etcd-service-ca\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.807822 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.827404 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.847546 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.868267 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.887012 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.900458 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b-metrics-tls\") pod \"dns-operator-744455d44c-wxm94\" (UID: \"ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b\") " pod="openshift-dns-operator/dns-operator-744455d44c-wxm94"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.907846 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.914504 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fc6091c-8d00-44c2-91ce-3f7b568bf355-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zhwpn\" (UID: \"4fc6091c-8d00-44c2-91ce-3f7b568bf355\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.927884 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.947068 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.967291 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.971232 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fc6091c-8d00-44c2-91ce-3f7b568bf355-config\") pod \"kube-controller-manager-operator-78b949d7b-zhwpn\" (UID: \"4fc6091c-8d00-44c2-91ce-3f7b568bf355\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn"
Feb 02 07:29:26 crc kubenswrapper[4730]: I0202 07:29:26.987457 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.008840 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.027983 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.038033 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/49b610d0-f480-4bb9-80eb-919d3301ffd4-metrics-tls\") pod \"ingress-operator-5b745b69d9-nncxk\" (UID: \"49b610d0-f480-4bb9-80eb-919d3301ffd4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.055146 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.060467 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49b610d0-f480-4bb9-80eb-919d3301ffd4-trusted-ca\") pod \"ingress-operator-5b745b69d9-nncxk\" (UID: \"49b610d0-f480-4bb9-80eb-919d3301ffd4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.068750 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.089728 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.108979 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.128786 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.135426 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da86c860-a495-4d5f-8084-32c64a497e52-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nvzsg\" (UID: \"da86c860-a495-4d5f-8084-32c64a497e52\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.147604 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.152775 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da86c860-a495-4d5f-8084-32c64a497e52-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nvzsg\" (UID: \"da86c860-a495-4d5f-8084-32c64a497e52\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.168379 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.188466 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.195306 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b84477ff-bcf6-4967-9052-df8ffa8e0003-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h5wzs\" (UID: \"b84477ff-bcf6-4967-9052-df8ffa8e0003\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.208443 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.218760 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b84477ff-bcf6-4967-9052-df8ffa8e0003-config\") pod \"kube-apiserver-operator-766d6c64bb-h5wzs\" (UID: \"b84477ff-bcf6-4967-9052-df8ffa8e0003\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.228574 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.248956 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.267943 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.288134 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.308041 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.327865 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.348274 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.368867 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.388141 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.407601 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.428572 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.447619 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.467891 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.481059 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dcac256f-ab91-4849-b215-dcf74506d0d2-proxy-tls\") pod \"machine-config-controller-84d6567774-mgpdt\" (UID: \"dcac256f-ab91-4849-b215-dcf74506d0d2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.486485 4730 request.go:700] Waited for 1.000462142s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&limit=500&resourceVersion=0
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.488128 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.507883 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.528100 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.538406 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4741fab0-5de7-4ba2-af2a-ca79c0de10d6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pk5cc\" (UID: \"4741fab0-5de7-4ba2-af2a-ca79c0de10d6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pk5cc"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.547925 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.563845 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85e25636-c407-451f-8176-f15ca7097a97-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-555kr\" (UID: \"85e25636-c407-451f-8176-f15ca7097a97\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.578687 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.586369 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eaa8ebca-b85f-4719-9b09-0f39ea039f24-srv-cert\") pod \"olm-operator-6b444d44fb-bmv4c\" (UID: \"eaa8ebca-b85f-4719-9b09-0f39ea039f24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.588231 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.608219 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.616560 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9203a95c-f4a4-449d-9f1a-d44338c975e7-profile-collector-cert\") pod \"catalog-operator-68c6474976-b5dpt\" (UID: \"9203a95c-f4a4-449d-9f1a-d44338c975e7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.623903 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eaa8ebca-b85f-4719-9b09-0f39ea039f24-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bmv4c\" (UID: \"eaa8ebca-b85f-4719-9b09-0f39ea039f24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.628006 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.647949 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.668429 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 02 07:29:27 crc kubenswrapper[4730]: E0202 07:29:27.674046 4730 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 02 07:29:27 crc kubenswrapper[4730]: E0202 07:29:27.674149 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95763bc4-bfd7-4afe-8a38-22770288a195-webhook-cert podName:95763bc4-bfd7-4afe-8a38-22770288a195 nodeName:}" failed. No retries permitted until 2026-02-02 07:29:28.174120546 +0000 UTC m=+141.595323924 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/95763bc4-bfd7-4afe-8a38-22770288a195-webhook-cert") pod "packageserver-d55dfcdfc-jg5m8" (UID: "95763bc4-bfd7-4afe-8a38-22770288a195") : failed to sync secret cache: timed out waiting for the condition
Feb 02 07:29:27 crc kubenswrapper[4730]: E0202 07:29:27.677267 4730 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 02 07:29:27 crc kubenswrapper[4730]: E0202 07:29:27.677346 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95763bc4-bfd7-4afe-8a38-22770288a195-apiservice-cert podName:95763bc4-bfd7-4afe-8a38-22770288a195 nodeName:}" failed. No retries permitted until 2026-02-02 07:29:28.177323411 +0000 UTC m=+141.598526759 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/95763bc4-bfd7-4afe-8a38-22770288a195-apiservice-cert") pod "packageserver-d55dfcdfc-jg5m8" (UID: "95763bc4-bfd7-4afe-8a38-22770288a195") : failed to sync secret cache: timed out waiting for the condition Feb 02 07:29:27 crc kubenswrapper[4730]: E0202 07:29:27.679045 4730 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 02 07:29:27 crc kubenswrapper[4730]: E0202 07:29:27.679117 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7dc234b-4559-460c-a4fe-85cedc72c368-control-plane-machine-set-operator-tls podName:f7dc234b-4559-460c-a4fe-85cedc72c368 nodeName:}" failed. No retries permitted until 2026-02-02 07:29:28.179097788 +0000 UTC m=+141.600301176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/f7dc234b-4559-460c-a4fe-85cedc72c368-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-zjbtc" (UID: "f7dc234b-4559-460c-a4fe-85cedc72c368") : failed to sync secret cache: timed out waiting for the condition Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.681563 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9203a95c-f4a4-449d-9f1a-d44338c975e7-srv-cert\") pod \"catalog-operator-68c6474976-b5dpt\" (UID: \"9203a95c-f4a4-449d-9f1a-d44338c975e7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.688385 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.707687 4730 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.727503 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.769384 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.788844 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.808593 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.828455 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.847687 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.868334 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.888688 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.908491 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.936683 4730 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.947725 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.967319 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 07:29:27 crc kubenswrapper[4730]: I0202 07:29:27.989599 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.009256 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.028413 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.049363 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.067877 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.087076 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.107526 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.127873 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.148671 4730 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.183582 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkr42\" (UniqueName: \"kubernetes.io/projected/00a8df73-2822-496c-8b52-435531e7cbf7-kube-api-access-qkr42\") pod \"downloads-7954f5f757-r8rf5\" (UID: \"00a8df73-2822-496c-8b52-435531e7cbf7\") " pod="openshift-console/downloads-7954f5f757-r8rf5" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.198525 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95763bc4-bfd7-4afe-8a38-22770288a195-apiservice-cert\") pod \"packageserver-d55dfcdfc-jg5m8\" (UID: \"95763bc4-bfd7-4afe-8a38-22770288a195\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.198719 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95763bc4-bfd7-4afe-8a38-22770288a195-webhook-cert\") pod \"packageserver-d55dfcdfc-jg5m8\" (UID: \"95763bc4-bfd7-4afe-8a38-22770288a195\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.198894 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7dc234b-4559-460c-a4fe-85cedc72c368-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zjbtc\" (UID: \"f7dc234b-4559-460c-a4fe-85cedc72c368\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.200950 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vb4p\" (UniqueName: 
\"kubernetes.io/projected/7c27212e-7271-4169-9aa7-8b2128167055-kube-api-access-2vb4p\") pod \"oauth-openshift-558db77b4-4hw4w\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.202377 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95763bc4-bfd7-4afe-8a38-22770288a195-webhook-cert\") pod \"packageserver-d55dfcdfc-jg5m8\" (UID: \"95763bc4-bfd7-4afe-8a38-22770288a195\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.203300 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95763bc4-bfd7-4afe-8a38-22770288a195-apiservice-cert\") pod \"packageserver-d55dfcdfc-jg5m8\" (UID: \"95763bc4-bfd7-4afe-8a38-22770288a195\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.203484 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7dc234b-4559-460c-a4fe-85cedc72c368-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zjbtc\" (UID: \"f7dc234b-4559-460c-a4fe-85cedc72c368\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.223404 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkxpm\" (UniqueName: \"kubernetes.io/projected/b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb-kube-api-access-dkxpm\") pod \"cluster-samples-operator-665b6dd947-d5mlp\" (UID: \"b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp" Feb 02 07:29:28 
crc kubenswrapper[4730]: I0202 07:29:28.242979 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bst2m\" (UniqueName: \"kubernetes.io/projected/29a89462-5b24-4924-a8a7-497b23f341e9-kube-api-access-bst2m\") pod \"openshift-controller-manager-operator-756b6f6bc6-xrslf\" (UID: \"29a89462-5b24-4924-a8a7-497b23f341e9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.264599 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x65jd\" (UniqueName: \"kubernetes.io/projected/9f07ac35-374b-4f55-af36-db35361500c4-kube-api-access-x65jd\") pod \"route-controller-manager-6576b87f9c-djvb6\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.283987 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmhtc\" (UniqueName: \"kubernetes.io/projected/db31bea7-e8e5-4390-8e72-fb8871151dd5-kube-api-access-xmhtc\") pod \"authentication-operator-69f744f599-xtp9g\" (UID: \"db31bea7-e8e5-4390-8e72-fb8871151dd5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.301989 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.307975 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.310991 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwphf\" (UniqueName: \"kubernetes.io/projected/8e988b50-280e-49d0-b7d2-ae606685dc16-kube-api-access-rwphf\") pod \"apiserver-76f77b778f-gnl7r\" (UID: \"8e988b50-280e-49d0-b7d2-ae606685dc16\") " pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.328285 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.344762 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-r8rf5" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.352537 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.395586 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jr2m\" (UniqueName: \"kubernetes.io/projected/01d50625-677d-463d-9439-2d7fd88fb649-kube-api-access-4jr2m\") pod \"controller-manager-879f6c89f-72nv7\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.403012 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.406796 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgcch\" (UniqueName: \"kubernetes.io/projected/bb8e6c76-96fc-4cac-b3e5-98227cddfb06-kube-api-access-lgcch\") pod \"apiserver-7bbb656c7d-t46v8\" (UID: \"bb8e6c76-96fc-4cac-b3e5-98227cddfb06\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.410593 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.433779 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs2gx\" (UniqueName: \"kubernetes.io/projected/7eca2f38-c23b-4874-b4a1-b57bafd24604-kube-api-access-hs2gx\") pod \"machine-api-operator-5694c8668f-tmxn6\" (UID: \"7eca2f38-c23b-4874-b4a1-b57bafd24604\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.438934 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.448701 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8drw9\" (UniqueName: \"kubernetes.io/projected/3981d83c-cd31-4d75-b3b3-0b087c28a16c-kube-api-access-8drw9\") pod \"openshift-apiserver-operator-796bbdcf4f-jvns9\" (UID: \"3981d83c-cd31-4d75-b3b3-0b087c28a16c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.464327 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8t82\" (UniqueName: \"kubernetes.io/projected/23885584-d53a-44da-879e-9d359c726f2c-kube-api-access-h8t82\") pod \"console-operator-58897d9998-6mcz6\" (UID: \"23885584-d53a-44da-879e-9d359c726f2c\") " pod="openshift-console-operator/console-operator-58897d9998-6mcz6" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.485552 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lhgw\" (UniqueName: \"kubernetes.io/projected/d3052bac-69ea-478e-963c-3951dd878ac2-kube-api-access-7lhgw\") pod \"machine-approver-56656f9798-84q5s\" (UID: \"d3052bac-69ea-478e-963c-3951dd878ac2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.487006 4730 request.go:700] Waited for 1.906664485s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.489267 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.510185 4730 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.517128 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.528966 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.530320 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.530816 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6"] Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.548944 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.567225 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.567739 4730 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.575547 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.590437 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.591235 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.592551 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-r8rf5"] Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.612862 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.613067 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.628271 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.669990 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b84477ff-bcf6-4967-9052-df8ffa8e0003-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h5wzs\" (UID: \"b84477ff-bcf6-4967-9052-df8ffa8e0003\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.684446 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gnl7r"] Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.685332 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qc66\" (UniqueName: \"kubernetes.io/projected/ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b-kube-api-access-8qc66\") pod \"dns-operator-744455d44c-wxm94\" (UID: \"ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b\") " pod="openshift-dns-operator/dns-operator-744455d44c-wxm94" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.691804 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf"] Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.701378 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bztq\" (UniqueName: \"kubernetes.io/projected/95763bc4-bfd7-4afe-8a38-22770288a195-kube-api-access-4bztq\") pod \"packageserver-d55dfcdfc-jg5m8\" (UID: \"95763bc4-bfd7-4afe-8a38-22770288a195\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.736218 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb8gr\" (UniqueName: 
\"kubernetes.io/projected/4994d05d-dfe5-42e1-81e9-8b4a09fb8934-kube-api-access-sb8gr\") pod \"console-f9d7485db-tqvrx\" (UID: \"4994d05d-dfe5-42e1-81e9-8b4a09fb8934\") " pod="openshift-console/console-f9d7485db-tqvrx" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.749231 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdncf\" (UniqueName: \"kubernetes.io/projected/dcac256f-ab91-4849-b215-dcf74506d0d2-kube-api-access-hdncf\") pod \"machine-config-controller-84d6567774-mgpdt\" (UID: \"dcac256f-ab91-4849-b215-dcf74506d0d2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.755380 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6mcz6" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.768809 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wxm94" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.773976 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ccmd\" (UniqueName: \"kubernetes.io/projected/f7dc234b-4559-460c-a4fe-85cedc72c368-kube-api-access-5ccmd\") pod \"control-plane-machine-set-operator-78cbb6b69f-zjbtc\" (UID: \"f7dc234b-4559-460c-a4fe-85cedc72c368\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.796064 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp"] Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.803619 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpm6w\" (UniqueName: \"kubernetes.io/projected/180061e1-4a0a-4a44-b6b0-5e38c20d4427-kube-api-access-cpm6w\") pod 
\"openshift-config-operator-7777fb866f-m55dz\" (UID: \"180061e1-4a0a-4a44-b6b0-5e38c20d4427\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.807074 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fc6091c-8d00-44c2-91ce-3f7b568bf355-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zhwpn\" (UID: \"4fc6091c-8d00-44c2-91ce-3f7b568bf355\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.817837 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.822407 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6m5v\" (UniqueName: \"kubernetes.io/projected/eaa8ebca-b85f-4719-9b09-0f39ea039f24-kube-api-access-b6m5v\") pod \"olm-operator-6b444d44fb-bmv4c\" (UID: \"eaa8ebca-b85f-4719-9b09-0f39ea039f24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.846560 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.847017 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.847571 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnbss\" (UniqueName: \"kubernetes.io/projected/ca56223c-bd37-4732-90ed-5b714bf35831-kube-api-access-bnbss\") pod \"cluster-image-registry-operator-dc59b4c8b-trn9r\" (UID: \"ca56223c-bd37-4732-90ed-5b714bf35831\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.864939 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hkqt\" (UniqueName: \"kubernetes.io/projected/85e25636-c407-451f-8176-f15ca7097a97-kube-api-access-2hkqt\") pod \"package-server-manager-789f6589d5-555kr\" (UID: \"85e25636-c407-451f-8176-f15ca7097a97\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.865059 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.872506 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.879022 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tmxn6"] Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.884110 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llc55\" (UniqueName: \"kubernetes.io/projected/5212b3b6-8f0a-47b3-9814-b093c275e32d-kube-api-access-llc55\") pod \"etcd-operator-b45778765-468rj\" (UID: \"5212b3b6-8f0a-47b3-9814-b093c275e32d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-468rj" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.884323 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.890866 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.908591 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49b610d0-f480-4bb9-80eb-919d3301ffd4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nncxk\" (UID: \"49b610d0-f480-4bb9-80eb-919d3301ffd4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk" Feb 02 07:29:28 crc kubenswrapper[4730]: W0202 07:29:28.917977 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eca2f38_c23b_4874_b4a1_b57bafd24604.slice/crio-b89dc5137a18bc37639247c5f322000e2881a468f11e382f02a9ef203f7b9842 WatchSource:0}: Error finding container b89dc5137a18bc37639247c5f322000e2881a468f11e382f02a9ef203f7b9842: Status 404 returned error can't find the container with id b89dc5137a18bc37639247c5f322000e2881a468f11e382f02a9ef203f7b9842 Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.921883 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xtp9g"] Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.922276 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" event={"ID":"d3052bac-69ea-478e-963c-3951dd878ac2","Type":"ContainerStarted","Data":"04c10c3f53197231028cb42dbbca0837cda82a398c5adb74cd68897ad252c484"} Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.928242 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ca56223c-bd37-4732-90ed-5b714bf35831-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-trn9r\" (UID: \"ca56223c-bd37-4732-90ed-5b714bf35831\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.950740 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnnrv\" (UniqueName: \"kubernetes.io/projected/4741fab0-5de7-4ba2-af2a-ca79c0de10d6-kube-api-access-nnnrv\") pod \"multus-admission-controller-857f4d67dd-pk5cc\" (UID: \"4741fab0-5de7-4ba2-af2a-ca79c0de10d6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pk5cc" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.958075 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4hw4w"] Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.958975 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" event={"ID":"9f07ac35-374b-4f55-af36-db35361500c4","Type":"ContainerStarted","Data":"70029b7d662886721485f0837781f9388bc3eed936f124a10f2805a52a77f2d1"} Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.959005 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" event={"ID":"9f07ac35-374b-4f55-af36-db35361500c4","Type":"ContainerStarted","Data":"8b7e28f6b4bc79b47a8668523ed427f8fa8fab0648bb7bdb96bfdfc48a555220"} Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.960540 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.965353 4730 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-djvb6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 
02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.965579 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" podUID="9f07ac35-374b-4f55-af36-db35361500c4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.965601 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da86c860-a495-4d5f-8084-32c64a497e52-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nvzsg\" (UID: \"da86c860-a495-4d5f-8084-32c64a497e52\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg" Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.991046 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf" event={"ID":"29a89462-5b24-4924-a8a7-497b23f341e9","Type":"ContainerStarted","Data":"b2f71ad620715f93a0585ca7e892f4640d9e4fde61bff96d348ac1eff0325b0f"} Feb 02 07:29:28 crc kubenswrapper[4730]: I0202 07:29:28.997540 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" event={"ID":"8e988b50-280e-49d0-b7d2-ae606685dc16","Type":"ContainerStarted","Data":"2360b1da151aa7cc4aa23047ec1770582ef96d2d41e7b91e808d45193b33aa43"} Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.010917 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-r8rf5" event={"ID":"00a8df73-2822-496c-8b52-435531e7cbf7","Type":"ContainerStarted","Data":"419a2690630bd221b5c530334c04209509138de32e7aef03b03a1925bcb803bd"} Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.010957 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-7954f5f757-r8rf5" event={"ID":"00a8df73-2822-496c-8b52-435531e7cbf7","Type":"ContainerStarted","Data":"e6c2db517f53bb0433a19974189012c2267da96494b2a64d32fc68ab72765212"} Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.011429 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-r8rf5" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.020227 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tqvrx" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.022383 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-r8rf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.022423 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r8rf5" podUID="00a8df73-2822-496c-8b52-435531e7cbf7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.025863 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9"] Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.029428 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmg5t\" (UniqueName: \"kubernetes.io/projected/ccdd934a-e3ae-459f-b8a6-20349fae2c4d-kube-api-access-qmg5t\") pod \"router-default-5444994796-229kn\" (UID: \"ccdd934a-e3ae-459f-b8a6-20349fae2c4d\") " pod="openshift-ingress/router-default-5444994796-229kn" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.031720 4730 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rdr6w\" (UniqueName: \"kubernetes.io/projected/49b610d0-f480-4bb9-80eb-919d3301ffd4-kube-api-access-rdr6w\") pod \"ingress-operator-5b745b69d9-nncxk\" (UID: \"49b610d0-f480-4bb9-80eb-919d3301ffd4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.033537 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bqzh\" (UniqueName: \"kubernetes.io/projected/9203a95c-f4a4-449d-9f1a-d44338c975e7-kube-api-access-4bqzh\") pod \"catalog-operator-68c6474976-b5dpt\" (UID: \"9203a95c-f4a4-449d-9f1a-d44338c975e7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.042351 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.053292 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-468rj" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.059514 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.062143 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6mcz6"] Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.099481 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.109430 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.122067 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-trusted-ca\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.122103 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-z57tj\" (UID: \"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.122128 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.122146 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5-images\") pod \"machine-config-operator-74547568cd-z57tj\" (UID: \"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.122182 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.122221 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.122244 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-registry-certificates\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.122268 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-registry-tls\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.122291 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk7xn\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-kube-api-access-sk7xn\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: 
\"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.122368 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f70d0fa-f17f-4743-bee7-0d3a5a728721-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztnjq\" (UID: \"4f70d0fa-f17f-4743-bee7-0d3a5a728721\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq" Feb 02 07:29:29 crc kubenswrapper[4730]: E0202 07:29:29.122595 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:29.622576284 +0000 UTC m=+143.043779692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.122742 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-bound-sa-token\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.122803 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f70d0fa-f17f-4743-bee7-0d3a5a728721-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztnjq\" (UID: \"4f70d0fa-f17f-4743-bee7-0d3a5a728721\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.122873 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpz8s\" (UniqueName: \"kubernetes.io/projected/fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5-kube-api-access-dpz8s\") pod \"machine-config-operator-74547568cd-z57tj\" (UID: \"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.122934 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c4k6\" (UniqueName: \"kubernetes.io/projected/4f70d0fa-f17f-4743-bee7-0d3a5a728721-kube-api-access-7c4k6\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztnjq\" (UID: \"4f70d0fa-f17f-4743-bee7-0d3a5a728721\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.123003 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nm64\" (UniqueName: \"kubernetes.io/projected/7bceaae1-36db-4899-bd71-6eba4448c9dd-kube-api-access-7nm64\") pod \"migrator-59844c95c7-r4mrs\" (UID: \"7bceaae1-36db-4899-bd71-6eba4448c9dd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4mrs" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.123093 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5-proxy-tls\") pod \"machine-config-operator-74547568cd-z57tj\" (UID: \"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.161912 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-72nv7"] Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.163339 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pk5cc" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.181979 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.186046 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8"] Feb 02 07:29:29 crc kubenswrapper[4730]: W0202 07:29:29.215900 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01d50625_677d_463d_9439_2d7fd88fb649.slice/crio-a6535f7c5336bd6c81da620eca492b62442298a91ee2f243f5f39d6f596986ef WatchSource:0}: Error finding container a6535f7c5336bd6c81da620eca492b62442298a91ee2f243f5f39d6f596986ef: Status 404 returned error can't find the container with id a6535f7c5336bd6c81da620eca492b62442298a91ee2f243f5f39d6f596986ef Feb 02 07:29:29 crc kubenswrapper[4730]: W0202 07:29:29.224439 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb8e6c76_96fc_4cac_b3e5_98227cddfb06.slice/crio-3faa03090180ba2b86720861e669b040422abd5e468051eb36c5f45ae0db0d0d WatchSource:0}: Error finding container 3faa03090180ba2b86720861e669b040422abd5e468051eb36c5f45ae0db0d0d: 
Status 404 returned error can't find the container with id 3faa03090180ba2b86720861e669b040422abd5e468051eb36c5f45ae0db0d0d Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.224626 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:29 crc kubenswrapper[4730]: E0202 07:29:29.224772 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:29.724757052 +0000 UTC m=+143.145960400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.224925 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c3b296ea-7ddf-4d5e-aec2-98ddca56f04b-signing-cabundle\") pod \"service-ca-9c57cc56f-h5h45\" (UID: \"c3b296ea-7ddf-4d5e-aec2-98ddca56f04b\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5h45" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.224961 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/c3b296ea-7ddf-4d5e-aec2-98ddca56f04b-signing-key\") pod \"service-ca-9c57cc56f-h5h45\" (UID: \"c3b296ea-7ddf-4d5e-aec2-98ddca56f04b\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5h45" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.224999 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6j2b\" (UniqueName: \"kubernetes.io/projected/97a0c310-7193-474d-832f-9248cf4624c3-kube-api-access-m6j2b\") pod \"machine-config-server-2ls8j\" (UID: \"97a0c310-7193-474d-832f-9248cf4624c3\") " pod="openshift-machine-config-operator/machine-config-server-2ls8j" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225013 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64e81c47-355e-4edc-a764-8964090df5b4-config\") pod \"service-ca-operator-777779d784-fmmcg\" (UID: \"64e81c47-355e-4edc-a764-8964090df5b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225039 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-trusted-ca\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225073 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-z57tj\" (UID: \"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225101 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de37b790-96db-42d1-8a4c-826e0a88bd97-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hs55q\" (UID: \"de37b790-96db-42d1-8a4c-826e0a88bd97\") " pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225185 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-plugins-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225210 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de37b790-96db-42d1-8a4c-826e0a88bd97-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hs55q\" (UID: \"de37b790-96db-42d1-8a4c-826e0a88bd97\") " pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225229 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-csi-data-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225269 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: 
\"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225293 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5-images\") pod \"machine-config-operator-74547568cd-z57tj\" (UID: \"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225336 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-secret-volume\") pod \"collect-profiles-29500275-7fz75\" (UID: \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225367 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d819ddc-33bc-49f6-8f94-d6d4ad8254bd-metrics-tls\") pod \"dns-default-2r4hv\" (UID: \"4d819ddc-33bc-49f6-8f94-d6d4ad8254bd\") " pod="openshift-dns/dns-default-2r4hv" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225402 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225477 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4v27\" (UniqueName: 
\"kubernetes.io/projected/7570e3fa-740c-4ea4-acb2-c61838123083-kube-api-access-k4v27\") pod \"ingress-canary-wbkch\" (UID: \"7570e3fa-740c-4ea4-acb2-c61838123083\") " pod="openshift-ingress-canary/ingress-canary-wbkch" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225511 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rb4t\" (UniqueName: \"kubernetes.io/projected/c3b296ea-7ddf-4d5e-aec2-98ddca56f04b-kube-api-access-8rb4t\") pod \"service-ca-9c57cc56f-h5h45\" (UID: \"c3b296ea-7ddf-4d5e-aec2-98ddca56f04b\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5h45" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225525 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws5jf\" (UniqueName: \"kubernetes.io/projected/4d819ddc-33bc-49f6-8f94-d6d4ad8254bd-kube-api-access-ws5jf\") pod \"dns-default-2r4hv\" (UID: \"4d819ddc-33bc-49f6-8f94-d6d4ad8254bd\") " pod="openshift-dns/dns-default-2r4hv" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225564 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-socket-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225593 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225620 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-registry-certificates\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225664 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-registry-tls\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225680 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk7xn\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-kube-api-access-sk7xn\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225749 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f70d0fa-f17f-4743-bee7-0d3a5a728721-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztnjq\" (UID: \"4f70d0fa-f17f-4743-bee7-0d3a5a728721\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225765 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-registration-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: 
\"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225790 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-bound-sa-token\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225804 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d819ddc-33bc-49f6-8f94-d6d4ad8254bd-config-volume\") pod \"dns-default-2r4hv\" (UID: \"4d819ddc-33bc-49f6-8f94-d6d4ad8254bd\") " pod="openshift-dns/dns-default-2r4hv" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225856 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/97a0c310-7193-474d-832f-9248cf4624c3-node-bootstrap-token\") pod \"machine-config-server-2ls8j\" (UID: \"97a0c310-7193-474d-832f-9248cf4624c3\") " pod="openshift-machine-config-operator/machine-config-server-2ls8j" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225889 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f70d0fa-f17f-4743-bee7-0d3a5a728721-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztnjq\" (UID: \"4f70d0fa-f17f-4743-bee7-0d3a5a728721\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.225904 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-config-volume\") pod \"collect-profiles-29500275-7fz75\" (UID: \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.226036 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpz8s\" (UniqueName: \"kubernetes.io/projected/fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5-kube-api-access-dpz8s\") pod \"machine-config-operator-74547568cd-z57tj\" (UID: \"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.226055 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c4k6\" (UniqueName: \"kubernetes.io/projected/4f70d0fa-f17f-4743-bee7-0d3a5a728721-kube-api-access-7c4k6\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztnjq\" (UID: \"4f70d0fa-f17f-4743-bee7-0d3a5a728721\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.226081 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nm64\" (UniqueName: \"kubernetes.io/projected/7bceaae1-36db-4899-bd71-6eba4448c9dd-kube-api-access-7nm64\") pod \"migrator-59844c95c7-r4mrs\" (UID: \"7bceaae1-36db-4899-bd71-6eba4448c9dd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4mrs" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.226099 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtbjg\" (UniqueName: \"kubernetes.io/projected/12f62344-8d04-4340-a671-8f0e49012692-kube-api-access-jtbjg\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " 
pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.226115 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9fdm\" (UniqueName: \"kubernetes.io/projected/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-kube-api-access-s9fdm\") pod \"collect-profiles-29500275-7fz75\" (UID: \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.226876 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-z57tj\" (UID: \"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.227138 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/97a0c310-7193-474d-832f-9248cf4624c3-certs\") pod \"machine-config-server-2ls8j\" (UID: \"97a0c310-7193-474d-832f-9248cf4624c3\") " pod="openshift-machine-config-operator/machine-config-server-2ls8j" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.227196 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xrhx\" (UniqueName: \"kubernetes.io/projected/64e81c47-355e-4edc-a764-8964090df5b4-kube-api-access-9xrhx\") pod \"service-ca-operator-777779d784-fmmcg\" (UID: \"64e81c47-355e-4edc-a764-8964090df5b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.228031 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" 
(UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-mountpoint-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.228082 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkkqj\" (UniqueName: \"kubernetes.io/projected/de37b790-96db-42d1-8a4c-826e0a88bd97-kube-api-access-fkkqj\") pod \"marketplace-operator-79b997595-hs55q\" (UID: \"de37b790-96db-42d1-8a4c-826e0a88bd97\") " pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.229206 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7570e3fa-740c-4ea4-acb2-c61838123083-cert\") pod \"ingress-canary-wbkch\" (UID: \"7570e3fa-740c-4ea4-acb2-c61838123083\") " pod="openshift-ingress-canary/ingress-canary-wbkch" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.229643 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5-proxy-tls\") pod \"machine-config-operator-74547568cd-z57tj\" (UID: \"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.229681 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64e81c47-355e-4edc-a764-8964090df5b4-serving-cert\") pod \"service-ca-operator-777779d784-fmmcg\" (UID: \"64e81c47-355e-4edc-a764-8964090df5b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.232776 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-trusted-ca\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.233025 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5-images\") pod \"machine-config-operator-74547568cd-z57tj\" (UID: \"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.238656 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.238903 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f70d0fa-f17f-4743-bee7-0d3a5a728721-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztnjq\" (UID: \"4f70d0fa-f17f-4743-bee7-0d3a5a728721\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.254611 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.256776 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f70d0fa-f17f-4743-bee7-0d3a5a728721-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztnjq\" (UID: \"4f70d0fa-f17f-4743-bee7-0d3a5a728721\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq" Feb 02 07:29:29 crc kubenswrapper[4730]: E0202 07:29:29.257366 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:29.757349972 +0000 UTC m=+143.178553320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.258292 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-registry-certificates\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.262671 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-registry-tls\") 
pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.283231 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5-proxy-tls\") pod \"machine-config-operator-74547568cd-z57tj\" (UID: \"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.284242 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nm64\" (UniqueName: \"kubernetes.io/projected/7bceaae1-36db-4899-bd71-6eba4448c9dd-kube-api-access-7nm64\") pod \"migrator-59844c95c7-r4mrs\" (UID: \"7bceaae1-36db-4899-bd71-6eba4448c9dd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4mrs" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.286577 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk7xn\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-kube-api-access-sk7xn\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.301847 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-bound-sa-token\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.326769 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-229kn" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.330984 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wxm94"] Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331211 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331376 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/97a0c310-7193-474d-832f-9248cf4624c3-certs\") pod \"machine-config-server-2ls8j\" (UID: \"97a0c310-7193-474d-832f-9248cf4624c3\") " pod="openshift-machine-config-operator/machine-config-server-2ls8j" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331398 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrhx\" (UniqueName: \"kubernetes.io/projected/64e81c47-355e-4edc-a764-8964090df5b4-kube-api-access-9xrhx\") pod \"service-ca-operator-777779d784-fmmcg\" (UID: \"64e81c47-355e-4edc-a764-8964090df5b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331426 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-mountpoint-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331447 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fkkqj\" (UniqueName: \"kubernetes.io/projected/de37b790-96db-42d1-8a4c-826e0a88bd97-kube-api-access-fkkqj\") pod \"marketplace-operator-79b997595-hs55q\" (UID: \"de37b790-96db-42d1-8a4c-826e0a88bd97\") " pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331464 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7570e3fa-740c-4ea4-acb2-c61838123083-cert\") pod \"ingress-canary-wbkch\" (UID: \"7570e3fa-740c-4ea4-acb2-c61838123083\") " pod="openshift-ingress-canary/ingress-canary-wbkch" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331482 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64e81c47-355e-4edc-a764-8964090df5b4-serving-cert\") pod \"service-ca-operator-777779d784-fmmcg\" (UID: \"64e81c47-355e-4edc-a764-8964090df5b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331501 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c3b296ea-7ddf-4d5e-aec2-98ddca56f04b-signing-cabundle\") pod \"service-ca-9c57cc56f-h5h45\" (UID: \"c3b296ea-7ddf-4d5e-aec2-98ddca56f04b\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5h45" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331517 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c3b296ea-7ddf-4d5e-aec2-98ddca56f04b-signing-key\") pod \"service-ca-9c57cc56f-h5h45\" (UID: \"c3b296ea-7ddf-4d5e-aec2-98ddca56f04b\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5h45" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331538 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6j2b\" (UniqueName: \"kubernetes.io/projected/97a0c310-7193-474d-832f-9248cf4624c3-kube-api-access-m6j2b\") pod \"machine-config-server-2ls8j\" (UID: \"97a0c310-7193-474d-832f-9248cf4624c3\") " pod="openshift-machine-config-operator/machine-config-server-2ls8j" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331551 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64e81c47-355e-4edc-a764-8964090df5b4-config\") pod \"service-ca-operator-777779d784-fmmcg\" (UID: \"64e81c47-355e-4edc-a764-8964090df5b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331567 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de37b790-96db-42d1-8a4c-826e0a88bd97-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hs55q\" (UID: \"de37b790-96db-42d1-8a4c-826e0a88bd97\") " pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331583 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-plugins-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331599 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de37b790-96db-42d1-8a4c-826e0a88bd97-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hs55q\" (UID: \"de37b790-96db-42d1-8a4c-826e0a88bd97\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331613 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-csi-data-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331631 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-secret-volume\") pod \"collect-profiles-29500275-7fz75\" (UID: \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331644 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d819ddc-33bc-49f6-8f94-d6d4ad8254bd-metrics-tls\") pod \"dns-default-2r4hv\" (UID: \"4d819ddc-33bc-49f6-8f94-d6d4ad8254bd\") " pod="openshift-dns/dns-default-2r4hv" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331662 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4v27\" (UniqueName: \"kubernetes.io/projected/7570e3fa-740c-4ea4-acb2-c61838123083-kube-api-access-k4v27\") pod \"ingress-canary-wbkch\" (UID: \"7570e3fa-740c-4ea4-acb2-c61838123083\") " pod="openshift-ingress-canary/ingress-canary-wbkch" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331678 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rb4t\" (UniqueName: \"kubernetes.io/projected/c3b296ea-7ddf-4d5e-aec2-98ddca56f04b-kube-api-access-8rb4t\") pod \"service-ca-9c57cc56f-h5h45\" (UID: \"c3b296ea-7ddf-4d5e-aec2-98ddca56f04b\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-h5h45" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331692 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws5jf\" (UniqueName: \"kubernetes.io/projected/4d819ddc-33bc-49f6-8f94-d6d4ad8254bd-kube-api-access-ws5jf\") pod \"dns-default-2r4hv\" (UID: \"4d819ddc-33bc-49f6-8f94-d6d4ad8254bd\") " pod="openshift-dns/dns-default-2r4hv" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331706 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-socket-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331730 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-registration-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331745 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d819ddc-33bc-49f6-8f94-d6d4ad8254bd-config-volume\") pod \"dns-default-2r4hv\" (UID: \"4d819ddc-33bc-49f6-8f94-d6d4ad8254bd\") " pod="openshift-dns/dns-default-2r4hv" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331762 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/97a0c310-7193-474d-832f-9248cf4624c3-node-bootstrap-token\") pod \"machine-config-server-2ls8j\" (UID: \"97a0c310-7193-474d-832f-9248cf4624c3\") " 
pod="openshift-machine-config-operator/machine-config-server-2ls8j" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331782 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-config-volume\") pod \"collect-profiles-29500275-7fz75\" (UID: \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331817 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtbjg\" (UniqueName: \"kubernetes.io/projected/12f62344-8d04-4340-a671-8f0e49012692-kube-api-access-jtbjg\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.331834 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9fdm\" (UniqueName: \"kubernetes.io/projected/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-kube-api-access-s9fdm\") pod \"collect-profiles-29500275-7fz75\" (UID: \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" Feb 02 07:29:29 crc kubenswrapper[4730]: E0202 07:29:29.331992 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:29.831979913 +0000 UTC m=+143.253183261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.337820 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-mountpoint-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.337896 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-csi-data-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.338276 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c4k6\" (UniqueName: \"kubernetes.io/projected/4f70d0fa-f17f-4743-bee7-0d3a5a728721-kube-api-access-7c4k6\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztnjq\" (UID: \"4f70d0fa-f17f-4743-bee7-0d3a5a728721\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.338456 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-socket-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: 
\"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.338508 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-registration-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.339006 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d819ddc-33bc-49f6-8f94-d6d4ad8254bd-config-volume\") pod \"dns-default-2r4hv\" (UID: \"4d819ddc-33bc-49f6-8f94-d6d4ad8254bd\") " pod="openshift-dns/dns-default-2r4hv" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.339036 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c3b296ea-7ddf-4d5e-aec2-98ddca56f04b-signing-cabundle\") pod \"service-ca-9c57cc56f-h5h45\" (UID: \"c3b296ea-7ddf-4d5e-aec2-98ddca56f04b\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5h45" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.339564 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-config-volume\") pod \"collect-profiles-29500275-7fz75\" (UID: \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.339758 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64e81c47-355e-4edc-a764-8964090df5b4-config\") pod \"service-ca-operator-777779d784-fmmcg\" (UID: \"64e81c47-355e-4edc-a764-8964090df5b4\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.342863 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/97a0c310-7193-474d-832f-9248cf4624c3-certs\") pod \"machine-config-server-2ls8j\" (UID: \"97a0c310-7193-474d-832f-9248cf4624c3\") " pod="openshift-machine-config-operator/machine-config-server-2ls8j" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.344550 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de37b790-96db-42d1-8a4c-826e0a88bd97-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hs55q\" (UID: \"de37b790-96db-42d1-8a4c-826e0a88bd97\") " pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.352252 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12f62344-8d04-4340-a671-8f0e49012692-plugins-dir\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.354490 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de37b790-96db-42d1-8a4c-826e0a88bd97-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hs55q\" (UID: \"de37b790-96db-42d1-8a4c-826e0a88bd97\") " pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.359838 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64e81c47-355e-4edc-a764-8964090df5b4-serving-cert\") pod \"service-ca-operator-777779d784-fmmcg\" (UID: 
\"64e81c47-355e-4edc-a764-8964090df5b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.396589 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9fdm\" (UniqueName: \"kubernetes.io/projected/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-kube-api-access-s9fdm\") pod \"collect-profiles-29500275-7fz75\" (UID: \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.399361 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-secret-volume\") pod \"collect-profiles-29500275-7fz75\" (UID: \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.401072 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/97a0c310-7193-474d-832f-9248cf4624c3-node-bootstrap-token\") pod \"machine-config-server-2ls8j\" (UID: \"97a0c310-7193-474d-832f-9248cf4624c3\") " pod="openshift-machine-config-operator/machine-config-server-2ls8j" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.401919 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7570e3fa-740c-4ea4-acb2-c61838123083-cert\") pod \"ingress-canary-wbkch\" (UID: \"7570e3fa-740c-4ea4-acb2-c61838123083\") " pod="openshift-ingress-canary/ingress-canary-wbkch" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.404601 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c3b296ea-7ddf-4d5e-aec2-98ddca56f04b-signing-key\") pod \"service-ca-9c57cc56f-h5h45\" 
(UID: \"c3b296ea-7ddf-4d5e-aec2-98ddca56f04b\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5h45" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.406520 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d819ddc-33bc-49f6-8f94-d6d4ad8254bd-metrics-tls\") pod \"dns-default-2r4hv\" (UID: \"4d819ddc-33bc-49f6-8f94-d6d4ad8254bd\") " pod="openshift-dns/dns-default-2r4hv" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.407043 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs"] Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.410395 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpz8s\" (UniqueName: \"kubernetes.io/projected/fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5-kube-api-access-dpz8s\") pod \"machine-config-operator-74547568cd-z57tj\" (UID: \"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.424034 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.432536 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4mrs" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.433270 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: E0202 07:29:29.433539 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:29.933528324 +0000 UTC m=+143.354731672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.439599 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.439993 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xrhx\" (UniqueName: \"kubernetes.io/projected/64e81c47-355e-4edc-a764-8964090df5b4-kube-api-access-9xrhx\") pod \"service-ca-operator-777779d784-fmmcg\" (UID: \"64e81c47-355e-4edc-a764-8964090df5b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.450625 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkkqj\" (UniqueName: \"kubernetes.io/projected/de37b790-96db-42d1-8a4c-826e0a88bd97-kube-api-access-fkkqj\") pod \"marketplace-operator-79b997595-hs55q\" (UID: \"de37b790-96db-42d1-8a4c-826e0a88bd97\") " pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.471590 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws5jf\" (UniqueName: \"kubernetes.io/projected/4d819ddc-33bc-49f6-8f94-d6d4ad8254bd-kube-api-access-ws5jf\") pod \"dns-default-2r4hv\" (UID: \"4d819ddc-33bc-49f6-8f94-d6d4ad8254bd\") " pod="openshift-dns/dns-default-2r4hv" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.486877 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m55dz"] Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.508229 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.509026 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.508286 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc"] Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.509798 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtbjg\" (UniqueName: \"kubernetes.io/projected/12f62344-8d04-4340-a671-8f0e49012692-kube-api-access-jtbjg\") pod \"csi-hostpathplugin-x6l8n\" (UID: \"12f62344-8d04-4340-a671-8f0e49012692\") " pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.510837 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c"] Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.512641 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.515941 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6j2b\" (UniqueName: \"kubernetes.io/projected/97a0c310-7193-474d-832f-9248cf4624c3-kube-api-access-m6j2b\") pod \"machine-config-server-2ls8j\" (UID: \"97a0c310-7193-474d-832f-9248cf4624c3\") " pod="openshift-machine-config-operator/machine-config-server-2ls8j" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.523475 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2ls8j" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.530640 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-2r4hv" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.537230 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:29 crc kubenswrapper[4730]: E0202 07:29:29.537540 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:30.037523949 +0000 UTC m=+143.458727297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.540049 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4v27\" (UniqueName: \"kubernetes.io/projected/7570e3fa-740c-4ea4-acb2-c61838123083-kube-api-access-k4v27\") pod \"ingress-canary-wbkch\" (UID: \"7570e3fa-740c-4ea4-acb2-c61838123083\") " pod="openshift-ingress-canary/ingress-canary-wbkch" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.545765 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rb4t\" (UniqueName: \"kubernetes.io/projected/c3b296ea-7ddf-4d5e-aec2-98ddca56f04b-kube-api-access-8rb4t\") pod \"service-ca-9c57cc56f-h5h45\" 
(UID: \"c3b296ea-7ddf-4d5e-aec2-98ddca56f04b\") " pod="openshift-service-ca/service-ca-9c57cc56f-h5h45" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.560358 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.638214 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: E0202 07:29:29.638598 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:30.138583937 +0000 UTC m=+143.559787285 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.667377 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr"] Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.695238 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt"] Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.728318 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tqvrx"] Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.739909 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:29 crc kubenswrapper[4730]: E0202 07:29:29.740208 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:30.24019407 +0000 UTC m=+143.661397408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.751739 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8"] Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.818998 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-h5h45" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.838064 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wbkch" Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.840961 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:29 crc kubenswrapper[4730]: E0202 07:29:29.841253 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:30.341241597 +0000 UTC m=+143.762444945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:29 crc kubenswrapper[4730]: W0202 07:29:29.918207 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4994d05d_dfe5_42e1_81e9_8b4a09fb8934.slice/crio-e06ca8c8d88f61c3d30c7be365beb77a31e5fcc9e04cd859d6de0484424459cc WatchSource:0}: Error finding container e06ca8c8d88f61c3d30c7be365beb77a31e5fcc9e04cd859d6de0484424459cc: Status 404 returned error can't find the container with id e06ca8c8d88f61c3d30c7be365beb77a31e5fcc9e04cd859d6de0484424459cc Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.939109 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn"] Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.946113 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:29 crc kubenswrapper[4730]: E0202 07:29:29.951239 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 07:29:30.44665521 +0000 UTC m=+143.867858558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.957212 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r"] Feb 02 07:29:29 crc kubenswrapper[4730]: I0202 07:29:29.978910 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r4mrs"] Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.009241 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk"] Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.018828 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-468rj"] Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.034254 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg"] Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.046523 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wxm94" event={"ID":"ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b","Type":"ContainerStarted","Data":"c9e4c801032c00bab93ed84a37b4fafb193ea950dfeb641a18ab081f2d540bf4"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.051830 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq"] Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.066616 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pk5cc"] Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.070145 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:30 crc kubenswrapper[4730]: E0202 07:29:30.070659 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:30.570642934 +0000 UTC m=+143.991846282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.093849 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" event={"ID":"7eca2f38-c23b-4874-b4a1-b57bafd24604","Type":"ContainerStarted","Data":"e181ee0badb9430e2066e24e99d4608c32c9974b30c869961981bf4cbbb5ccdc"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.094108 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" event={"ID":"7eca2f38-c23b-4874-b4a1-b57bafd24604","Type":"ContainerStarted","Data":"59f4eacfed172dd4393e57c80312bfdd33d9412f9069fcc66175c2a511f5d142"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.094123 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" event={"ID":"7eca2f38-c23b-4874-b4a1-b57bafd24604","Type":"ContainerStarted","Data":"b89dc5137a18bc37639247c5f322000e2881a468f11e382f02a9ef203f7b9842"} Feb 02 07:29:30 crc kubenswrapper[4730]: W0202 07:29:30.095243 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca56223c_bd37_4732_90ed_5b714bf35831.slice/crio-b55a0a22abc342488f4b33c70e8f72394955c796892ddec0fc238cac13a598d5 WatchSource:0}: Error finding container b55a0a22abc342488f4b33c70e8f72394955c796892ddec0fc238cac13a598d5: Status 404 returned error can't find the container with id 
b55a0a22abc342488f4b33c70e8f72394955c796892ddec0fc238cac13a598d5 Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.096476 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt"] Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.101825 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg"] Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.102653 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e988b50-280e-49d0-b7d2-ae606685dc16" containerID="b59d9d318689614c7f18c2e6810619317eb1e6dd44f4790629068d8c206ec809" exitCode=0 Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.102777 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" event={"ID":"8e988b50-280e-49d0-b7d2-ae606685dc16","Type":"ContainerDied","Data":"b59d9d318689614c7f18c2e6810619317eb1e6dd44f4790629068d8c206ec809"} Feb 02 07:29:30 crc kubenswrapper[4730]: W0202 07:29:30.122883 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bceaae1_36db_4899_bd71_6eba4448c9dd.slice/crio-5180fc78e432bdd0492c1f0659646f46f0925a30eb2d0d837bbf6e22b4bc7278 WatchSource:0}: Error finding container 5180fc78e432bdd0492c1f0659646f46f0925a30eb2d0d837bbf6e22b4bc7278: Status 404 returned error can't find the container with id 5180fc78e432bdd0492c1f0659646f46f0925a30eb2d0d837bbf6e22b4bc7278 Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.126504 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz" event={"ID":"180061e1-4a0a-4a44-b6b0-5e38c20d4427","Type":"ContainerStarted","Data":"1f9b909ef394f6539b07491bca5e1ca8a39a978cdae417a3aeec8c4e1f115d6f"} Feb 02 07:29:30 crc kubenswrapper[4730]: W0202 07:29:30.146474 4730 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49b610d0_f480_4bb9_80eb_919d3301ffd4.slice/crio-a87c6dac992e3ad4257a91246ac356e88b966f68c12df049d1c224adcb71a67b WatchSource:0}: Error finding container a87c6dac992e3ad4257a91246ac356e88b966f68c12df049d1c224adcb71a67b: Status 404 returned error can't find the container with id a87c6dac992e3ad4257a91246ac356e88b966f68c12df049d1c224adcb71a67b Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.146712 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" event={"ID":"7c27212e-7271-4169-9aa7-8b2128167055","Type":"ContainerStarted","Data":"b8c83b571a376d646906aea7c97dece00918bfe0b6cf4d59315526210bae14e7"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.146739 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" event={"ID":"7c27212e-7271-4169-9aa7-8b2128167055","Type":"ContainerStarted","Data":"a27023c5fd8c98efc724f97ca3a09f8e74cb10072b49462c7850baa209f0818b"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.147535 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.175651 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.176547 4730 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4hw4w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.5:6443/healthz\": dial tcp 
10.217.0.5:6443: connect: connection refused" start-of-body= Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.176589 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" podUID="7c27212e-7271-4169-9aa7-8b2128167055" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.5:6443/healthz\": dial tcp 10.217.0.5:6443: connect: connection refused" Feb 02 07:29:30 crc kubenswrapper[4730]: E0202 07:29:30.177107 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:30.677089564 +0000 UTC m=+144.098292912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.185354 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-229kn" event={"ID":"ccdd934a-e3ae-459f-b8a6-20349fae2c4d","Type":"ContainerStarted","Data":"df826e850d7fc0d47a9035e858111affaf804e48bd7948e9f09620714bff09cc"} Feb 02 07:29:30 crc kubenswrapper[4730]: W0202 07:29:30.202666 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f70d0fa_f17f_4743_bee7_0d3a5a728721.slice/crio-567da43501236ef1497845dff8bac83ad4937aaf13f1bd544daed5b3c9ee2e68 WatchSource:0}: Error finding container 
567da43501236ef1497845dff8bac83ad4937aaf13f1bd544daed5b3c9ee2e68: Status 404 returned error can't find the container with id 567da43501236ef1497845dff8bac83ad4937aaf13f1bd544daed5b3c9ee2e68 Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.212923 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs" event={"ID":"b84477ff-bcf6-4967-9052-df8ffa8e0003","Type":"ContainerStarted","Data":"9dd29043f55fbb4437a369ff2b7938ad0f3448deeb699fa5d2ff95c742d7151b"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.227610 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hs55q"] Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.236676 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" event={"ID":"bb8e6c76-96fc-4cac-b3e5-98227cddfb06","Type":"ContainerStarted","Data":"c4c4ae6ec0c41723138e6fd621ab58b4988a4b572430f8ba340a690a5484277c"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.236827 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" event={"ID":"bb8e6c76-96fc-4cac-b3e5-98227cddfb06","Type":"ContainerStarted","Data":"3faa03090180ba2b86720861e669b040422abd5e468051eb36c5f45ae0db0d0d"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.248714 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" event={"ID":"01d50625-677d-463d-9439-2d7fd88fb649","Type":"ContainerStarted","Data":"d536b46800784026280cecc85439e2fe6a4b74b2b6b53b439e80f7db558e529a"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.248754 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" 
event={"ID":"01d50625-677d-463d-9439-2d7fd88fb649","Type":"ContainerStarted","Data":"a6535f7c5336bd6c81da620eca492b62442298a91ee2f243f5f39d6f596986ef"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.249503 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.260755 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2ls8j" event={"ID":"97a0c310-7193-474d-832f-9248cf4624c3","Type":"ContainerStarted","Data":"708a38b9db2666a1ec9594b96325d02c731f85e1f4ea3a52f6aeb858235c6cd9"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.264983 4730 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-72nv7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.265025 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" podUID="01d50625-677d-463d-9439-2d7fd88fb649" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.275773 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp" event={"ID":"b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb","Type":"ContainerStarted","Data":"0d899294ebeb90ab4246714f7825ceb53fefedc8c8c8aad5f9437f92e797e14e"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.275810 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp" event={"ID":"b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb","Type":"ContainerStarted","Data":"3c97bfdf81a284b2fb67b72977ee9e335e1c87d9594663f0b5c7da64957ab1b7"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.279878 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.280153 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn" event={"ID":"4fc6091c-8d00-44c2-91ce-3f7b568bf355","Type":"ContainerStarted","Data":"8faafa26090f3a89046e119d52b1f488347de7c8a48d276a185b56a7db570d9f"} Feb 02 07:29:30 crc kubenswrapper[4730]: E0202 07:29:30.282427 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:30.782413124 +0000 UTC m=+144.203616472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.282554 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9" event={"ID":"3981d83c-cd31-4d75-b3b3-0b087c28a16c","Type":"ContainerStarted","Data":"d95de184acfbdf821a9947d546d2418bab7be6fbcba45d1cec2ea841490eefe7"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.282583 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9" event={"ID":"3981d83c-cd31-4d75-b3b3-0b087c28a16c","Type":"ContainerStarted","Data":"2aa79bf3f84ccf65982acf606b626b1d6d16d7914e54e718237f8e0d5591177a"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.284697 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj"] Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.284721 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc" event={"ID":"f7dc234b-4559-460c-a4fe-85cedc72c368","Type":"ContainerStarted","Data":"299519617a384a0e41d0a916c80b13baab3829a206477d38269faa2e25c68c2d"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.287104 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt" 
event={"ID":"dcac256f-ab91-4849-b215-dcf74506d0d2","Type":"ContainerStarted","Data":"8bdf67aa706cb4ea9dad93aa32590191e220126f6e2f3494a16baa9657e3666b"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.301302 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75"] Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.305575 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x6l8n"] Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.308492 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" event={"ID":"95763bc4-bfd7-4afe-8a38-22770288a195","Type":"ContainerStarted","Data":"78ef82814cabd3aad54f7bc57d703722ca9403822cdd1d8828f922e586db9fda"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.317633 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" event={"ID":"d3052bac-69ea-478e-963c-3951dd878ac2","Type":"ContainerStarted","Data":"72beecadd48f036dace1c2e2013a568de91e4f68effa8b5937230ec1713a0373"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.317675 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" event={"ID":"d3052bac-69ea-478e-963c-3951dd878ac2","Type":"ContainerStarted","Data":"f7a978f1e4b44a4c292491d4d0e044bfceb467b109b6ceabab5e989eabdc6040"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.320883 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr" event={"ID":"85e25636-c407-451f-8176-f15ca7097a97","Type":"ContainerStarted","Data":"ebba1548065031b7e87fb4d715ccc82b65296a54bfda24114c09bfc4343b9d43"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.325864 4730 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c" event={"ID":"eaa8ebca-b85f-4719-9b09-0f39ea039f24","Type":"ContainerStarted","Data":"ac9f5051e92a851e24a3c1913e3d87d87fb652bf7e9f2c969c4e4baeb671194f"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.332394 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" event={"ID":"db31bea7-e8e5-4390-8e72-fb8871151dd5","Type":"ContainerStarted","Data":"2cd3e302c3ed725b5917e3e8d60ee1896fa592443754755a2fa0ae8406403053"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.332420 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" event={"ID":"db31bea7-e8e5-4390-8e72-fb8871151dd5","Type":"ContainerStarted","Data":"4cff65a0bb6c5edf6179e483eaa51cf4d2e9ebeb6b37c3b53225653786f97a78"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.350505 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6mcz6" event={"ID":"23885584-d53a-44da-879e-9d359c726f2c","Type":"ContainerStarted","Data":"330cf9d5717c66cf8abf252ac0ea36a660956603c0ced4b0c6e1fd75bfaccd27"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.350542 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6mcz6" event={"ID":"23885584-d53a-44da-879e-9d359c726f2c","Type":"ContainerStarted","Data":"d4e36516a6b3039680694fe8ffbeeab4888cb700fca55eb766119e03c2ccb23d"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.350954 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6mcz6" Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.365489 4730 patch_prober.go:28] interesting pod/console-operator-58897d9998-6mcz6 container/console-operator 
namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.365527 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6mcz6" podUID="23885584-d53a-44da-879e-9d359c726f2c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.367260 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf" event={"ID":"29a89462-5b24-4924-a8a7-497b23f341e9","Type":"ContainerStarted","Data":"326f6fcc4f9f63dfbe5071eed840c58db84b3f392cdd97416a9871ffb4b5e669"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.371115 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tqvrx" event={"ID":"4994d05d-dfe5-42e1-81e9-8b4a09fb8934","Type":"ContainerStarted","Data":"e06ca8c8d88f61c3d30c7be365beb77a31e5fcc9e04cd859d6de0484424459cc"} Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.372792 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2r4hv"] Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.373637 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-r8rf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.373666 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r8rf5" podUID="00a8df73-2822-496c-8b52-435531e7cbf7" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.380323 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.380865 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:30 crc kubenswrapper[4730]: E0202 07:29:30.382659 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:30.88264137 +0000 UTC m=+144.303844718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.485486 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:30 crc kubenswrapper[4730]: E0202 07:29:30.486950 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:30.986935124 +0000 UTC m=+144.408138472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.513026 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" podStartSLOduration=122.513005242 podStartE2EDuration="2m2.513005242s" podCreationTimestamp="2026-02-02 07:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:30.505216586 +0000 UTC m=+143.926419954" watchObservedRunningTime="2026-02-02 07:29:30.513005242 +0000 UTC m=+143.934208590" Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.543800 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-r8rf5" podStartSLOduration=123.543779194 podStartE2EDuration="2m3.543779194s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:30.540911159 +0000 UTC m=+143.962114517" watchObservedRunningTime="2026-02-02 07:29:30.543779194 +0000 UTC m=+143.964982552" Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.587144 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:30 crc kubenswrapper[4730]: E0202 07:29:30.587475 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:31.087452807 +0000 UTC m=+144.508656155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.594509 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h5h45"] Feb 02 07:29:30 crc kubenswrapper[4730]: W0202 07:29:30.689734 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3b296ea_7ddf_4d5e_aec2_98ddca56f04b.slice/crio-64194ffd54c32f5341711f8138fa805cd739b0a4129087b4ba81dc0b411684f6 WatchSource:0}: Error finding container 64194ffd54c32f5341711f8138fa805cd739b0a4129087b4ba81dc0b411684f6: Status 404 returned error can't find the container with id 64194ffd54c32f5341711f8138fa805cd739b0a4129087b4ba81dc0b411684f6 Feb 02 07:29:30 crc kubenswrapper[4730]: E0202 07:29:30.690578 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:31.190562039 +0000 UTC m=+144.611765387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.690291 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.702980 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wbkch"] Feb 02 07:29:30 crc kubenswrapper[4730]: W0202 07:29:30.787212 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7570e3fa_740c_4ea4_acb2_c61838123083.slice/crio-3a4c2bc77b040688127f520864362566cd892f238bfd837c59899fc69feb7d1d WatchSource:0}: Error finding container 3a4c2bc77b040688127f520864362566cd892f238bfd837c59899fc69feb7d1d: Status 404 returned error can't find the container with id 3a4c2bc77b040688127f520864362566cd892f238bfd837c59899fc69feb7d1d Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.798411 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:30 crc kubenswrapper[4730]: E0202 07:29:30.799054 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:31.299038173 +0000 UTC m=+144.720241521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:30 crc kubenswrapper[4730]: I0202 07:29:30.904115 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:30 crc kubenswrapper[4730]: E0202 07:29:30.904405 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:31.404394775 +0000 UTC m=+144.825598113 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.005065 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:31 crc kubenswrapper[4730]: E0202 07:29:31.005276 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:31.505251027 +0000 UTC m=+144.926454375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.005579 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:31 crc kubenswrapper[4730]: E0202 07:29:31.005844 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:31.505833873 +0000 UTC m=+144.927037211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.067308 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" podStartSLOduration=124.067291815 podStartE2EDuration="2m4.067291815s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:31.040734374 +0000 UTC m=+144.461937712" watchObservedRunningTime="2026-02-02 07:29:31.067291815 +0000 UTC m=+144.488495163" Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.106357 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:31 crc kubenswrapper[4730]: E0202 07:29:31.106760 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:31.606746517 +0000 UTC m=+145.027949855 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.107976 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6mcz6" podStartSLOduration=124.107958409 podStartE2EDuration="2m4.107958409s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:31.068445295 +0000 UTC m=+144.489648653" watchObservedRunningTime="2026-02-02 07:29:31.107958409 +0000 UTC m=+144.529161757" Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.144139 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-tmxn6" podStartSLOduration=123.144121603 podStartE2EDuration="2m3.144121603s" podCreationTimestamp="2026-02-02 07:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:31.141872684 +0000 UTC m=+144.563076032" watchObservedRunningTime="2026-02-02 07:29:31.144121603 +0000 UTC m=+144.565324951" Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.207618 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: 
\"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:31 crc kubenswrapper[4730]: E0202 07:29:31.208091 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:31.708078272 +0000 UTC m=+145.129281620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.221336 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xrslf" podStartSLOduration=124.221319031 podStartE2EDuration="2m4.221319031s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:31.218829016 +0000 UTC m=+144.640032364" watchObservedRunningTime="2026-02-02 07:29:31.221319031 +0000 UTC m=+144.642522379" Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.302730 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-xtp9g" podStartSLOduration=124.30271313 podStartE2EDuration="2m4.30271313s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-02 07:29:31.29855317 +0000 UTC m=+144.719756528" watchObservedRunningTime="2026-02-02 07:29:31.30271313 +0000 UTC m=+144.723916478" Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.309045 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:31 crc kubenswrapper[4730]: E0202 07:29:31.311945 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:31.811902833 +0000 UTC m=+145.233106181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.312515 4730 csr.go:261] certificate signing request csr-hl4jc is approved, waiting to be issued Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.312541 4730 csr.go:257] certificate signing request csr-hl4jc is issued Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.345827 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-84q5s" podStartSLOduration=124.345807098 podStartE2EDuration="2m4.345807098s" podCreationTimestamp="2026-02-02 
07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:31.345414617 +0000 UTC m=+144.766617965" watchObservedRunningTime="2026-02-02 07:29:31.345807098 +0000 UTC m=+144.767010446" Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.377497 4730 generic.go:334] "Generic (PLEG): container finished" podID="180061e1-4a0a-4a44-b6b0-5e38c20d4427" containerID="bf00d77629483a8e409a02c76ac3526c232c737cc1b164b145a9928def13032b" exitCode=0 Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.377558 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz" event={"ID":"180061e1-4a0a-4a44-b6b0-5e38c20d4427","Type":"ContainerDied","Data":"bf00d77629483a8e409a02c76ac3526c232c737cc1b164b145a9928def13032b"} Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.395071 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" podStartSLOduration=124.395051968 podStartE2EDuration="2m4.395051968s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:31.385461525 +0000 UTC m=+144.806664883" watchObservedRunningTime="2026-02-02 07:29:31.395051968 +0000 UTC m=+144.816255316" Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.405473 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs" event={"ID":"b84477ff-bcf6-4967-9052-df8ffa8e0003","Type":"ContainerStarted","Data":"2ed7f044b36644f5f7a3f5976fe6a2d696a84c7737bcef75964d036fdd74ae8a"} Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.410423 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:31 crc kubenswrapper[4730]: E0202 07:29:31.410778 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:31.910765923 +0000 UTC m=+145.331969271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.426561 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn" event={"ID":"4fc6091c-8d00-44c2-91ce-3f7b568bf355","Type":"ContainerStarted","Data":"e4ccba5a3c89a50dc573809494f5b6c3d5822470ffa6abfc014722706489e3f6"} Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.428314 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jvns9" podStartSLOduration=124.428295305 podStartE2EDuration="2m4.428295305s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:31.425201694 +0000 UTC 
m=+144.846405042" watchObservedRunningTime="2026-02-02 07:29:31.428295305 +0000 UTC m=+144.849498653" Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.478924 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt" event={"ID":"9203a95c-f4a4-449d-9f1a-d44338c975e7","Type":"ContainerStarted","Data":"457e73ac1e209327b277f4d37ec10ca8e463070526bb646ba914acdeac549078"} Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.479039 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt" event={"ID":"9203a95c-f4a4-449d-9f1a-d44338c975e7","Type":"ContainerStarted","Data":"40b4abdb25b141c8d44a052be7fc27acb1756357a5e3646f3620155130f08092"} Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.479876 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt" Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.490018 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2r4hv" event={"ID":"4d819ddc-33bc-49f6-8f94-d6d4ad8254bd","Type":"ContainerStarted","Data":"dabb92fad93f1e5af9e918bec3d78fda849b133db8684aaf6c761ab020118910"} Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.503118 4730 generic.go:334] "Generic (PLEG): container finished" podID="bb8e6c76-96fc-4cac-b3e5-98227cddfb06" containerID="c4c4ae6ec0c41723138e6fd621ab58b4988a4b572430f8ba340a690a5484277c" exitCode=0 Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.503187 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" event={"ID":"bb8e6c76-96fc-4cac-b3e5-98227cddfb06","Type":"ContainerDied","Data":"c4c4ae6ec0c41723138e6fd621ab58b4988a4b572430f8ba340a690a5484277c"} Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.505794 4730 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr" event={"ID":"85e25636-c407-451f-8176-f15ca7097a97","Type":"ContainerStarted","Data":"0555f664b08e7ef447c962f5edbc0b6a0813178112437072c35b206ab8809c13"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.507503 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" event={"ID":"917fb6df-688b-4f0e-98eb-4bb26b37f6f8","Type":"ContainerStarted","Data":"f818268693b1f2cb599fe1e3b493ace334ce372800b8b68d1ce6c9a726e1d5e8"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.510887 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 07:29:31 crc kubenswrapper[4730]: E0202 07:29:31.512384 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:32.012364985 +0000 UTC m=+145.433568333 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.522499 4730 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-b5dpt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.522548 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt" podUID="9203a95c-f4a4-449d-9f1a-d44338c975e7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.525531 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h5wzs" podStartSLOduration=124.525518982 podStartE2EDuration="2m4.525518982s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:31.523606732 +0000 UTC m=+144.944810070" watchObservedRunningTime="2026-02-02 07:29:31.525518982 +0000 UTC m=+144.946722330"
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.536411 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg" event={"ID":"da86c860-a495-4d5f-8084-32c64a497e52","Type":"ContainerStarted","Data":"35b6f4756e0b7c80b7b87c1760d98b976472a311947aed624581871953b774b8"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.584800 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r" event={"ID":"ca56223c-bd37-4732-90ed-5b714bf35831","Type":"ContainerStarted","Data":"2b4711a62fd4c997fb792d41873cec8749cb27eb1d5a443f1314ff288ffb1c5d"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.584841 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r" event={"ID":"ca56223c-bd37-4732-90ed-5b714bf35831","Type":"ContainerStarted","Data":"b55a0a22abc342488f4b33c70e8f72394955c796892ddec0fc238cac13a598d5"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.596400 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg" event={"ID":"64e81c47-355e-4edc-a764-8964090df5b4","Type":"ContainerStarted","Data":"7f05647e2b0a62d7fa9a70ae8a6a1e83902e5c0088676ed4a172cad5490c0730"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.596438 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg" event={"ID":"64e81c47-355e-4edc-a764-8964090df5b4","Type":"ContainerStarted","Data":"3eef53f60c6e249828e8c8faad0a83b5064aa43c20bc6a6e6b8c9a3e275278f4"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.598020 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tqvrx" event={"ID":"4994d05d-dfe5-42e1-81e9-8b4a09fb8934","Type":"ContainerStarted","Data":"83ad6aa66ee52155463674008d036a93dc26c27d6bc6daddf18c1ef1fafa7aa8"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.599759 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2ls8j" event={"ID":"97a0c310-7193-474d-832f-9248cf4624c3","Type":"ContainerStarted","Data":"fcb0d13db21b8e1156635b22714efbc9ab14dd3276bf616e21e1dc342bc78e99"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.612268 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb"
Feb 02 07:29:31 crc kubenswrapper[4730]: E0202 07:29:31.613678 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:32.113666319 +0000 UTC m=+145.534869667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.627516 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-468rj" event={"ID":"5212b3b6-8f0a-47b3-9814-b093c275e32d","Type":"ContainerStarted","Data":"56992a0e827ce5eec847dba6116536c0504c1e8b969385bd840813b658657745"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.627716 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-468rj" event={"ID":"5212b3b6-8f0a-47b3-9814-b093c275e32d","Type":"ContainerStarted","Data":"4c6134a5cdecc477e883aafc0edc69026c3cb96754ab6e561a6eff3736371e61"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.643775 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-229kn" event={"ID":"ccdd934a-e3ae-459f-b8a6-20349fae2c4d","Type":"ContainerStarted","Data":"6f3e6f981ac9dbdcb292d6fc9a077686722bb2d4d9b547d397b83e236ea01e2c"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.688300 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wbkch" event={"ID":"7570e3fa-740c-4ea4-acb2-c61838123083","Type":"ContainerStarted","Data":"3a4c2bc77b040688127f520864362566cd892f238bfd837c59899fc69feb7d1d"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.693351 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zhwpn" podStartSLOduration=124.693337532 podStartE2EDuration="2m4.693337532s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:31.692638334 +0000 UTC m=+145.113841682" watchObservedRunningTime="2026-02-02 07:29:31.693337532 +0000 UTC m=+145.114540880"
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.701667 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4mrs" event={"ID":"7bceaae1-36db-4899-bd71-6eba4448c9dd","Type":"ContainerStarted","Data":"42b1f5e25bd1b741f89c14ffd8ee7db22d36395dc72f3dd9909091096f0ff71f"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.701702 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4mrs" event={"ID":"7bceaae1-36db-4899-bd71-6eba4448c9dd","Type":"ContainerStarted","Data":"5180fc78e432bdd0492c1f0659646f46f0925a30eb2d0d837bbf6e22b4bc7278"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.718663 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 07:29:31 crc kubenswrapper[4730]: E0202 07:29:31.718972 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:32.218947249 +0000 UTC m=+145.640150597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.719251 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb"
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.721124 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pk5cc" event={"ID":"4741fab0-5de7-4ba2-af2a-ca79c0de10d6","Type":"ContainerStarted","Data":"e04eaec66f7901136d505f4b0fc3f73e2b85421b13e459fb8023724cc46e7fdd"}
Feb 02 07:29:31 crc kubenswrapper[4730]: E0202 07:29:31.721580 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:32.221568968 +0000 UTC m=+145.642772316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.744373 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmmcg" podStartSLOduration=123.744356229 podStartE2EDuration="2m3.744356229s" podCreationTimestamp="2026-02-02 07:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:31.742704886 +0000 UTC m=+145.163908244" watchObservedRunningTime="2026-02-02 07:29:31.744356229 +0000 UTC m=+145.165559577"
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.779633 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk" event={"ID":"49b610d0-f480-4bb9-80eb-919d3301ffd4","Type":"ContainerStarted","Data":"c63762165d204e52fcaac3f1be86573c9d3d4521064a74e5a3fc217f7366134d"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.779690 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk" event={"ID":"49b610d0-f480-4bb9-80eb-919d3301ffd4","Type":"ContainerStarted","Data":"a87c6dac992e3ad4257a91246ac356e88b966f68c12df049d1c224adcb71a67b"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.782720 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-229kn" podStartSLOduration=124.782702272 podStartE2EDuration="2m4.782702272s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:31.779916048 +0000 UTC m=+145.201119396" watchObservedRunningTime="2026-02-02 07:29:31.782702272 +0000 UTC m=+145.203905620"
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.793673 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-h5h45" event={"ID":"c3b296ea-7ddf-4d5e-aec2-98ddca56f04b","Type":"ContainerStarted","Data":"64194ffd54c32f5341711f8138fa805cd739b0a4129087b4ba81dc0b411684f6"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.799528 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" event={"ID":"95763bc4-bfd7-4afe-8a38-22770288a195","Type":"ContainerStarted","Data":"0e709e3e5a7fc5b6983d3631d74a72f103617f5324a5e8ac12ada802855ca82e"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.802037 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8"
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.812423 4730 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-jg5m8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body=
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.812484 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" podUID="95763bc4-bfd7-4afe-8a38-22770288a195" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused"
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.820678 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 07:29:31 crc kubenswrapper[4730]: E0202 07:29:31.824840 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:32.322986305 +0000 UTC m=+145.744189653 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.833763 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wxm94" event={"ID":"ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b","Type":"ContainerStarted","Data":"2bbc6e2ab53fff5dc901735a947d1b8a1e95a4a5d516e5dd23040bb5b67f61ca"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.848213 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-tqvrx" podStartSLOduration=124.848194391 podStartE2EDuration="2m4.848194391s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:31.822881012 +0000 UTC m=+145.244084360" watchObservedRunningTime="2026-02-02 07:29:31.848194391 +0000 UTC m=+145.269397739"
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.852009 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" event={"ID":"12f62344-8d04-4340-a671-8f0e49012692","Type":"ContainerStarted","Data":"ed0404e85214eb85cb89c39e591aafd079baf0a9342cc446a88c3e51d27a7875"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.871428 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" event={"ID":"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5","Type":"ContainerStarted","Data":"62cd863c66a3a9239f2787ef5afa2e6f08a68a4e6cf14e640d04655844ddf0cf"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.871478 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" event={"ID":"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5","Type":"ContainerStarted","Data":"4623bcb3764ccc4e3f3ed11993333ab649477da88b15cd203cb233d37ff2cbb4"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.904009 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trn9r" podStartSLOduration=124.903995554 podStartE2EDuration="2m4.903995554s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:31.848647183 +0000 UTC m=+145.269850531" watchObservedRunningTime="2026-02-02 07:29:31.903995554 +0000 UTC m=+145.325198902"
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.921422 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp" event={"ID":"b6a1bfca-20d7-4a1f-9b47-b77dbc19c3fb","Type":"ContainerStarted","Data":"ed5122ae8bce20b3f029c0b471ba9cd2e10bd7ac29396448b856a0638f108d4a"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.923869 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c" event={"ID":"eaa8ebca-b85f-4719-9b09-0f39ea039f24","Type":"ContainerStarted","Data":"7cfc4df0fbddefc17d2db399b0b98fc97b3a2ef487ff7ae974f1d7da831177f6"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.924447 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c"
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.925457 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc" event={"ID":"f7dc234b-4559-460c-a4fe-85cedc72c368","Type":"ContainerStarted","Data":"f92f576dcaabd5a8664e82f1c626bb417b8f37c9a4a54d10bd1948f716f54c2d"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.925991 4730 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-bmv4c container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.926018 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c" podUID="eaa8ebca-b85f-4719-9b09-0f39ea039f24" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.926872 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb"
Feb 02 07:29:31 crc kubenswrapper[4730]: E0202 07:29:31.954931 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:32.454916538 +0000 UTC m=+145.876119876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.958252 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt" event={"ID":"dcac256f-ab91-4849-b215-dcf74506d0d2","Type":"ContainerStarted","Data":"6082e12857e1448f09b60ff3f9489cba3621ea7bf1e5c699dc2c353a571a145d"}
Feb 02 07:29:31 crc kubenswrapper[4730]: I0202 07:29:31.967916 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" event={"ID":"8e988b50-280e-49d0-b7d2-ae606685dc16","Type":"ContainerStarted","Data":"99d2583f5edf8831d7eb6caae93e2554b138f86f3e9695022caecaaac5fd8e4b"}
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.031438 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 07:29:32 crc kubenswrapper[4730]: E0202 07:29:32.032588 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:32.532566748 +0000 UTC m=+145.953770096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.045510 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" event={"ID":"de37b790-96db-42d1-8a4c-826e0a88bd97","Type":"ContainerStarted","Data":"f88d7df3224c90072b79a0e921bf5dc6668373fa2e328bcd4f92d4ac1152d8fa"}
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.045555 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" event={"ID":"de37b790-96db-42d1-8a4c-826e0a88bd97","Type":"ContainerStarted","Data":"d1167cf7c5c001ec91ccd3979de29ca0943092922fdd4c53e445f1f04658f54f"}
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.045508 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2ls8j" podStartSLOduration=6.045493459 podStartE2EDuration="6.045493459s" podCreationTimestamp="2026-02-02 07:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:31.905044832 +0000 UTC m=+145.326248180" watchObservedRunningTime="2026-02-02 07:29:32.045493459 +0000 UTC m=+145.466696807"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.046928 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.047127 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-468rj" podStartSLOduration=125.047117392 podStartE2EDuration="2m5.047117392s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.005659458 +0000 UTC m=+145.426862816" watchObservedRunningTime="2026-02-02 07:29:32.047117392 +0000 UTC m=+145.468320740"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.071000 4730 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hs55q container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.071055 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" podUID="de37b790-96db-42d1-8a4c-826e0a88bd97" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.099886 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt" podStartSLOduration=125.099873975 podStartE2EDuration="2m5.099873975s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.09742351 +0000 UTC m=+145.518626858" watchObservedRunningTime="2026-02-02 07:29:32.099873975 +0000 UTC m=+145.521077323"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.101107 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq" event={"ID":"4f70d0fa-f17f-4743-bee7-0d3a5a728721","Type":"ContainerStarted","Data":"4182ec455d361a0155c6704317b9a2fef8efa3ace15f37292b39a6de7d9b87f8"}
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.102530 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq" event={"ID":"4f70d0fa-f17f-4743-bee7-0d3a5a728721","Type":"ContainerStarted","Data":"567da43501236ef1497845dff8bac83ad4937aaf13f1bd544daed5b3c9ee2e68"}
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.103372 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-r8rf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.103447 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r8rf5" podUID="00a8df73-2822-496c-8b52-435531e7cbf7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.134387 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb"
Feb 02 07:29:32 crc kubenswrapper[4730]: E0202 07:29:32.146024 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:32.646008523 +0000 UTC m=+146.067211871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.176103 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.177999 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" podStartSLOduration=125.177978407 podStartE2EDuration="2m5.177978407s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.145584032 +0000 UTC m=+145.566787380" watchObservedRunningTime="2026-02-02 07:29:32.177978407 +0000 UTC m=+145.599181755"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.202564 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztnjq" podStartSLOduration=125.202550316 podStartE2EDuration="2m5.202550316s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.202542086 +0000 UTC m=+145.623745434" watchObservedRunningTime="2026-02-02 07:29:32.202550316 +0000 UTC m=+145.623753664"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.204336 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" podStartSLOduration=125.204329073 podStartE2EDuration="2m5.204329073s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.179412005 +0000 UTC m=+145.600615353" watchObservedRunningTime="2026-02-02 07:29:32.204329073 +0000 UTC m=+145.625532421"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.226424 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d5mlp" podStartSLOduration=125.226409716 podStartE2EDuration="2m5.226409716s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.226355844 +0000 UTC m=+145.647559192" watchObservedRunningTime="2026-02-02 07:29:32.226409716 +0000 UTC m=+145.647613064"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.236953 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 07:29:32 crc kubenswrapper[4730]: E0202 07:29:32.237506 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:32.737493218 +0000 UTC m=+146.158696566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.262462 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zjbtc" podStartSLOduration=124.262422686 podStartE2EDuration="2m4.262422686s" podCreationTimestamp="2026-02-02 07:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.259480039 +0000 UTC m=+145.680683397" watchObservedRunningTime="2026-02-02 07:29:32.262422686 +0000 UTC m=+145.683626044"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.287477 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" podStartSLOduration=125.287460577 podStartE2EDuration="2m5.287460577s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.283856562 +0000 UTC m=+145.705059910" watchObservedRunningTime="2026-02-02 07:29:32.287460577 +0000 UTC m=+145.708663925"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.304207 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c" podStartSLOduration=125.304190119 podStartE2EDuration="2m5.304190119s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.302643008 +0000 UTC m=+145.723846356" watchObservedRunningTime="2026-02-02 07:29:32.304190119 +0000 UTC m=+145.725393467"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.317875 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-02 07:24:31 +0000 UTC, rotation deadline is 2026-10-19 20:35:48.856819508 +0000 UTC
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.317903 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6229h6m16.538918417s for next certificate rotation
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.327645 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-229kn"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.334726 4730 patch_prober.go:28] interesting pod/router-default-5444994796-229kn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 07:29:32 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld
Feb 02 07:29:32 crc kubenswrapper[4730]: [+]process-running ok
Feb 02 07:29:32 crc kubenswrapper[4730]: healthz check failed
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.334809 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-229kn" podUID="ccdd934a-e3ae-459f-b8a6-20349fae2c4d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.338020 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" podStartSLOduration=124.337996071 podStartE2EDuration="2m4.337996071s" podCreationTimestamp="2026-02-02 07:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.336470601 +0000 UTC m=+145.757673949" watchObservedRunningTime="2026-02-02 07:29:32.337996071 +0000 UTC m=+145.759199419"
Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.341039 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb"
Feb 02 07:29:32 crc kubenswrapper[4730]: E0202 07:29:32.341385 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:32.841374261 +0000 UTC m=+146.262577609 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.369307 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-h5h45" podStartSLOduration=124.369292998 podStartE2EDuration="2m4.369292998s" podCreationTimestamp="2026-02-02 07:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.366769331 +0000 UTC m=+145.787972679" watchObservedRunningTime="2026-02-02 07:29:32.369292998 +0000 UTC m=+145.790496346" Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.399033 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" podStartSLOduration=124.399016651 podStartE2EDuration="2m4.399016651s" podCreationTimestamp="2026-02-02 07:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.396893025 +0000 UTC m=+145.818096383" watchObservedRunningTime="2026-02-02 07:29:32.399016651 +0000 UTC m=+145.820219999" Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.430401 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6mcz6" Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.442358 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:32 crc kubenswrapper[4730]: E0202 07:29:32.442528 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:32.942503499 +0000 UTC m=+146.363706847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.442613 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:32 crc kubenswrapper[4730]: E0202 07:29:32.443021 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:32.943006763 +0000 UTC m=+146.364210111 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.459968 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-wxm94" podStartSLOduration=125.45994566 podStartE2EDuration="2m5.45994566s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.420459398 +0000 UTC m=+145.841662746" watchObservedRunningTime="2026-02-02 07:29:32.45994566 +0000 UTC m=+145.881149018" Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.460590 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt" podStartSLOduration=125.460583627 podStartE2EDuration="2m5.460583627s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.456688784 +0000 UTC m=+145.877892132" watchObservedRunningTime="2026-02-02 07:29:32.460583627 +0000 UTC m=+145.881786975" Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.534195 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4mrs" podStartSLOduration=125.53417531 podStartE2EDuration="2m5.53417531s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.496119955 +0000 UTC m=+145.917323313" watchObservedRunningTime="2026-02-02 07:29:32.53417531 +0000 UTC m=+145.955378658" Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.534746 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk" podStartSLOduration=125.534738214 podStartE2EDuration="2m5.534738214s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.531601222 +0000 UTC m=+145.952804600" watchObservedRunningTime="2026-02-02 07:29:32.534738214 +0000 UTC m=+145.955941582" Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.544594 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:32 crc kubenswrapper[4730]: E0202 07:29:32.544949 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:33.044934184 +0000 UTC m=+146.466137522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.577040 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-wbkch" podStartSLOduration=6.577024431 podStartE2EDuration="6.577024431s" podCreationTimestamp="2026-02-02 07:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:32.576521128 +0000 UTC m=+145.997724476" watchObservedRunningTime="2026-02-02 07:29:32.577024431 +0000 UTC m=+145.998227779" Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.646079 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:32 crc kubenswrapper[4730]: E0202 07:29:32.646513 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:33.146493485 +0000 UTC m=+146.567696923 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.693626 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.748423 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:32 crc kubenswrapper[4730]: E0202 07:29:32.748605 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:33.24857972 +0000 UTC m=+146.669783068 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.748890 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:32 crc kubenswrapper[4730]: E0202 07:29:32.749436 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:33.249400391 +0000 UTC m=+146.670603739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.850199 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:32 crc kubenswrapper[4730]: E0202 07:29:32.850609 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:33.350588503 +0000 UTC m=+146.771791851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:32 crc kubenswrapper[4730]: I0202 07:29:32.952084 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:32 crc kubenswrapper[4730]: E0202 07:29:32.952418 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:33.452403511 +0000 UTC m=+146.873606859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.053028 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:33 crc kubenswrapper[4730]: E0202 07:29:33.053181 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:33.55314584 +0000 UTC m=+146.974349188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.053330 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:33 crc kubenswrapper[4730]: E0202 07:29:33.053630 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:33.553616463 +0000 UTC m=+146.974819811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.106348 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pk5cc" event={"ID":"4741fab0-5de7-4ba2-af2a-ca79c0de10d6","Type":"ContainerStarted","Data":"08c6428203505b6d00c4b6fbe32df16ef99f59b911593eac8fe942547cb67807"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.106404 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pk5cc" event={"ID":"4741fab0-5de7-4ba2-af2a-ca79c0de10d6","Type":"ContainerStarted","Data":"eda88293b7feee50f6b05f5d511e02a9e919ee80f67a28e2ab9d8a4855004811"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.107660 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" event={"ID":"917fb6df-688b-4f0e-98eb-4bb26b37f6f8","Type":"ContainerStarted","Data":"c28261728e69767033d8b1b1c257405bf63e044ba07bc06fe5d57c1fbc9766f7"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.109028 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z57tj" event={"ID":"fa4b00a1-284e-4e4f-94ba-bb0d5e6ebba5","Type":"ContainerStarted","Data":"6a15f03249791c458bbec8d92cfd48c0cef14824603f53a98f2f4a311cceeee3"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.110132 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-wxm94" event={"ID":"ed1e5ae6-01f3-4f17-a3c3-1f4c03c54d6b","Type":"ContainerStarted","Data":"c76f9ef85f5eb14859da38051397abbc9a1aba0931cd6476389786c3e57f8d67"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.111282 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" event={"ID":"12f62344-8d04-4340-a671-8f0e49012692","Type":"ContainerStarted","Data":"b67670d631e9e259b1802907c666c8f24b8554ad881319583e8ae36a0e9a8054"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.113066 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2r4hv" event={"ID":"4d819ddc-33bc-49f6-8f94-d6d4ad8254bd","Type":"ContainerStarted","Data":"55e28466d81430e74a2d4ba077998fca37abc813cf2cfd1e22a27c5f84652a23"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.113092 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2r4hv" event={"ID":"4d819ddc-33bc-49f6-8f94-d6d4ad8254bd","Type":"ContainerStarted","Data":"7ee6684f78fe57f82b583af97c00213d5c544872d73a72a97a0a9f52f36b8e18"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.113643 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2r4hv" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.114713 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mgpdt" event={"ID":"dcac256f-ab91-4849-b215-dcf74506d0d2","Type":"ContainerStarted","Data":"cdba9a7a0227312db600ec99844c3059c7356108778ddcd7a196418d887ba7b9"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.116179 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nncxk" 
event={"ID":"49b610d0-f480-4bb9-80eb-919d3301ffd4","Type":"ContainerStarted","Data":"44b06ec16358d3d67849d8688090a46c72b7302bc49321aa19ea943e1a986692"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.117574 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-h5h45" event={"ID":"c3b296ea-7ddf-4d5e-aec2-98ddca56f04b","Type":"ContainerStarted","Data":"fc5ffd583df45f1fedbca5b44ba4e7b0490cce91ab6526b22cd9c5c43f3cfadb"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.120009 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" event={"ID":"8e988b50-280e-49d0-b7d2-ae606685dc16","Type":"ContainerStarted","Data":"a54421d8e8afc8d2f70060d6d3db95395196db1ad680e25569d5ce880bc361db"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.123773 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4mrs" event={"ID":"7bceaae1-36db-4899-bd71-6eba4448c9dd","Type":"ContainerStarted","Data":"221e6f6903d738a0f32237dd6154b86d7f88b3e6e89c143af7f90336757591f4"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.139956 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr" event={"ID":"85e25636-c407-451f-8176-f15ca7097a97","Type":"ContainerStarted","Data":"6e7753f0ff2f3315d4276170910d02c27b3ef17e4fa70b44ce31c5276692eceb"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.140196 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.141826 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz" 
event={"ID":"180061e1-4a0a-4a44-b6b0-5e38c20d4427","Type":"ContainerStarted","Data":"27b5e1afeedca940c49369ca2b603b4b6565cfe490d8153e00208aa5122871f5"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.141977 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.143378 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" event={"ID":"bb8e6c76-96fc-4cac-b3e5-98227cddfb06","Type":"ContainerStarted","Data":"d6a9bad7c5a81ecf81d07bd18225a589893107fdefad4b8a99c73783b6a7df8f"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.144459 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg" event={"ID":"da86c860-a495-4d5f-8084-32c64a497e52","Type":"ContainerStarted","Data":"0b623e0efc26e9e2eff8fd7bfdab447d539a2a7e1394d951faf76b5be228baa1"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.146315 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wbkch" event={"ID":"7570e3fa-740c-4ea4-acb2-c61838123083","Type":"ContainerStarted","Data":"dd5f0a1a899f8bcc93e283e403991870f2656fc16b39fc559b17a0b905e47c70"} Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.147329 4730 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hs55q container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.147363 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" podUID="de37b790-96db-42d1-8a4c-826e0a88bd97" containerName="marketplace-operator" 
probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.147790 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-pk5cc" podStartSLOduration=126.147771658 podStartE2EDuration="2m6.147771658s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:33.145206361 +0000 UTC m=+146.566409709" watchObservedRunningTime="2026-02-02 07:29:33.147771658 +0000 UTC m=+146.568975006" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.154558 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:33 crc kubenswrapper[4730]: E0202 07:29:33.154735 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:33.654712802 +0000 UTC m=+147.075916150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.154958 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:33 crc kubenswrapper[4730]: E0202 07:29:33.156261 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:33.656250342 +0000 UTC m=+147.077453690 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.158059 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bmv4c" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.170080 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz" podStartSLOduration=126.170064637 podStartE2EDuration="2m6.170064637s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:33.167616552 +0000 UTC m=+146.588819910" watchObservedRunningTime="2026-02-02 07:29:33.170064637 +0000 UTC m=+146.591267985" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.176297 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b5dpt" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.198281 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr" podStartSLOduration=125.198263581 podStartE2EDuration="2m5.198263581s" podCreationTimestamp="2026-02-02 07:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:33.196594957 +0000 UTC 
m=+146.617798315" watchObservedRunningTime="2026-02-02 07:29:33.198263581 +0000 UTC m=+146.619466939" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.242190 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nvzsg" podStartSLOduration=126.24214743 podStartE2EDuration="2m6.24214743s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:33.23910606 +0000 UTC m=+146.660309408" watchObservedRunningTime="2026-02-02 07:29:33.24214743 +0000 UTC m=+146.663350788" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.256999 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:33 crc kubenswrapper[4730]: E0202 07:29:33.259718 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:33.759698953 +0000 UTC m=+147.180902301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.349350 4730 patch_prober.go:28] interesting pod/router-default-5444994796-229kn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 07:29:33 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 02 07:29:33 crc kubenswrapper[4730]: [+]process-running ok Feb 02 07:29:33 crc kubenswrapper[4730]: healthz check failed Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.349401 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-229kn" podUID="ccdd934a-e3ae-459f-b8a6-20349fae2c4d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.361887 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:33 crc kubenswrapper[4730]: E0202 07:29:33.362311 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 07:29:33.862296302 +0000 UTC m=+147.283499650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.411246 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.411573 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.417311 4730 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gnl7r container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.417373 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gnl7r" podUID="8e988b50-280e-49d0-b7d2-ae606685dc16" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.468302 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:33 crc kubenswrapper[4730]: E0202 07:29:33.468632 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:33.968616659 +0000 UTC m=+147.389820007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.474022 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" podStartSLOduration=125.474003591 podStartE2EDuration="2m5.474003591s" podCreationTimestamp="2026-02-02 07:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:33.473547379 +0000 UTC m=+146.894750737" watchObservedRunningTime="2026-02-02 07:29:33.474003591 +0000 UTC m=+146.895206939" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.474583 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2r4hv" podStartSLOduration=7.474576936 podStartE2EDuration="7.474576936s" podCreationTimestamp="2026-02-02 07:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:33.366930514 +0000 UTC m=+146.788133862" 
watchObservedRunningTime="2026-02-02 07:29:33.474576936 +0000 UTC m=+146.895780284" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.569540 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:33 crc kubenswrapper[4730]: E0202 07:29:33.569931 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:34.069914733 +0000 UTC m=+147.491118071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.629276 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.629545 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.670556 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:33 crc kubenswrapper[4730]: E0202 07:29:33.670877 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:34.170860558 +0000 UTC m=+147.592063906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.773893 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:33 crc kubenswrapper[4730]: E0202 07:29:33.774248 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:34.274235867 +0000 UTC m=+147.695439215 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.875182 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:33 crc kubenswrapper[4730]: E0202 07:29:33.875416 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:34.375389668 +0000 UTC m=+147.796593016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.875467 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:33 crc kubenswrapper[4730]: E0202 07:29:33.875811 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:34.375799828 +0000 UTC m=+147.797003176 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:33 crc kubenswrapper[4730]: I0202 07:29:33.976124 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:33 crc kubenswrapper[4730]: E0202 07:29:33.976526 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:34.476503857 +0000 UTC m=+147.897707205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.077598 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.077947 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:34.577931465 +0000 UTC m=+147.999134813 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.147796 4730 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-jg5m8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.147843 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" podUID="95763bc4-bfd7-4afe-8a38-22770288a195" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.151806 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" event={"ID":"12f62344-8d04-4340-a671-8f0e49012692","Type":"ContainerStarted","Data":"9b721bc9c6edd021049196533c7de1a850d8eaa9f643c3f0666207e2203da88e"} Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.159125 4730 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hs55q container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 02 
07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.159419 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" podUID="de37b790-96db-42d1-8a4c-826e0a88bd97" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.178537 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.178717 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:34.678692085 +0000 UTC m=+148.099895423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.178952 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.179294 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:34.67927613 +0000 UTC m=+148.100479478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.274385 4730 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-t46v8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 02 07:29:34 crc kubenswrapper[4730]: [+]log ok Feb 02 07:29:34 crc kubenswrapper[4730]: [+]etcd ok Feb 02 07:29:34 crc kubenswrapper[4730]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 07:29:34 crc kubenswrapper[4730]: [-]poststarthook/generic-apiserver-start-informers failed: reason withheld Feb 02 07:29:34 crc kubenswrapper[4730]: [+]poststarthook/max-in-flight-filter ok Feb 02 07:29:34 crc kubenswrapper[4730]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 07:29:34 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 02 07:29:34 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 02 07:29:34 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 02 07:29:34 crc kubenswrapper[4730]: livez check failed Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.274445 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8" podUID="bb8e6c76-96fc-4cac-b3e5-98227cddfb06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.280084 4730 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.280225 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:34.780204165 +0000 UTC m=+148.201407513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.280526 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.283849 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:34.783832921 +0000 UTC m=+148.205036269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.340561 4730 patch_prober.go:28] interesting pod/router-default-5444994796-229kn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 07:29:34 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 02 07:29:34 crc kubenswrapper[4730]: [+]process-running ok Feb 02 07:29:34 crc kubenswrapper[4730]: healthz check failed Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.340619 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-229kn" podUID="ccdd934a-e3ae-459f-b8a6-20349fae2c4d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.381784 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.381969 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 07:29:34.881945531 +0000 UTC m=+148.303148879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.382971 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.383340 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:34.883332987 +0000 UTC m=+148.304536335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.484331 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.484468 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:34.984450757 +0000 UTC m=+148.405654105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.484835 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.485198 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:34.985181446 +0000 UTC m=+148.406384794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.513959 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sk8kx"] Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.515053 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.527891 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.538329 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sk8kx"] Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.585903 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.586238 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.086210333 +0000 UTC m=+148.507413681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.586266 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5534cdef-89e0-4d1c-b5b6-24a739696063-utilities\") pod \"certified-operators-sk8kx\" (UID: \"5534cdef-89e0-4d1c-b5b6-24a739696063\") " pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.586289 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbn62\" (UniqueName: \"kubernetes.io/projected/5534cdef-89e0-4d1c-b5b6-24a739696063-kube-api-access-pbn62\") pod \"certified-operators-sk8kx\" (UID: \"5534cdef-89e0-4d1c-b5b6-24a739696063\") " pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.586315 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5534cdef-89e0-4d1c-b5b6-24a739696063-catalog-content\") pod \"certified-operators-sk8kx\" (UID: \"5534cdef-89e0-4d1c-b5b6-24a739696063\") " pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.586499 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.586837 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.08682662 +0000 UTC m=+148.508029968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.635078 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m55dz" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.687470 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.687641 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5534cdef-89e0-4d1c-b5b6-24a739696063-utilities\") pod \"certified-operators-sk8kx\" (UID: \"5534cdef-89e0-4d1c-b5b6-24a739696063\") " pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:29:34 crc kubenswrapper[4730]: 
I0202 07:29:34.687664 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbn62\" (UniqueName: \"kubernetes.io/projected/5534cdef-89e0-4d1c-b5b6-24a739696063-kube-api-access-pbn62\") pod \"certified-operators-sk8kx\" (UID: \"5534cdef-89e0-4d1c-b5b6-24a739696063\") " pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.687685 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5534cdef-89e0-4d1c-b5b6-24a739696063-catalog-content\") pod \"certified-operators-sk8kx\" (UID: \"5534cdef-89e0-4d1c-b5b6-24a739696063\") " pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.688006 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.18797277 +0000 UTC m=+148.609176118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.688221 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5534cdef-89e0-4d1c-b5b6-24a739696063-catalog-content\") pod \"certified-operators-sk8kx\" (UID: \"5534cdef-89e0-4d1c-b5b6-24a739696063\") " pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.688416 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5534cdef-89e0-4d1c-b5b6-24a739696063-utilities\") pod \"certified-operators-sk8kx\" (UID: \"5534cdef-89e0-4d1c-b5b6-24a739696063\") " pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.717952 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tpd9l"] Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.718886 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.723806 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.745272 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbn62\" (UniqueName: \"kubernetes.io/projected/5534cdef-89e0-4d1c-b5b6-24a739696063-kube-api-access-pbn62\") pod \"certified-operators-sk8kx\" (UID: \"5534cdef-89e0-4d1c-b5b6-24a739696063\") " pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.755073 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tpd9l"] Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.788909 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10e61c85-454b-47fa-8827-5a1de18dcfdf-catalog-content\") pod \"community-operators-tpd9l\" (UID: \"10e61c85-454b-47fa-8827-5a1de18dcfdf\") " pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.788964 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.788985 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvqst\" (UniqueName: 
\"kubernetes.io/projected/10e61c85-454b-47fa-8827-5a1de18dcfdf-kube-api-access-qvqst\") pod \"community-operators-tpd9l\" (UID: \"10e61c85-454b-47fa-8827-5a1de18dcfdf\") " pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.789036 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10e61c85-454b-47fa-8827-5a1de18dcfdf-utilities\") pod \"community-operators-tpd9l\" (UID: \"10e61c85-454b-47fa-8827-5a1de18dcfdf\") " pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.789555 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.289540681 +0000 UTC m=+148.710744019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.828666 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.889636 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.889769 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.389748267 +0000 UTC m=+148.810951615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.889925 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10e61c85-454b-47fa-8827-5a1de18dcfdf-catalog-content\") pod \"community-operators-tpd9l\" (UID: \"10e61c85-454b-47fa-8827-5a1de18dcfdf\") " pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.889955 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.889976 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvqst\" (UniqueName: \"kubernetes.io/projected/10e61c85-454b-47fa-8827-5a1de18dcfdf-kube-api-access-qvqst\") pod \"community-operators-tpd9l\" (UID: \"10e61c85-454b-47fa-8827-5a1de18dcfdf\") " pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.890026 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10e61c85-454b-47fa-8827-5a1de18dcfdf-utilities\") pod \"community-operators-tpd9l\" (UID: \"10e61c85-454b-47fa-8827-5a1de18dcfdf\") " pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.890307 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.390292541 +0000 UTC m=+148.811495889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.890427 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10e61c85-454b-47fa-8827-5a1de18dcfdf-catalog-content\") pod \"community-operators-tpd9l\" (UID: \"10e61c85-454b-47fa-8827-5a1de18dcfdf\") " pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.890450 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10e61c85-454b-47fa-8827-5a1de18dcfdf-utilities\") pod \"community-operators-tpd9l\" (UID: \"10e61c85-454b-47fa-8827-5a1de18dcfdf\") " pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.909515 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xz9bh"] Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.918136 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.939110 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvqst\" (UniqueName: \"kubernetes.io/projected/10e61c85-454b-47fa-8827-5a1de18dcfdf-kube-api-access-qvqst\") pod \"community-operators-tpd9l\" (UID: \"10e61c85-454b-47fa-8827-5a1de18dcfdf\") " pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.967822 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xz9bh"] Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.991084 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.991540 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm88h\" (UniqueName: \"kubernetes.io/projected/e1a4d2f2-e171-4f3a-b890-976343fdafc5-kube-api-access-nm88h\") pod \"certified-operators-xz9bh\" (UID: \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\") " pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.991605 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a4d2f2-e171-4f3a-b890-976343fdafc5-utilities\") pod \"certified-operators-xz9bh\" (UID: \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\") " pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:29:34 crc kubenswrapper[4730]: I0202 07:29:34.991670 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a4d2f2-e171-4f3a-b890-976343fdafc5-catalog-content\") pod \"certified-operators-xz9bh\" (UID: \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\") " pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:29:34 crc kubenswrapper[4730]: E0202 07:29:34.991812 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.491793051 +0000 UTC m=+148.912996389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.030457 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.047304 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jg5m8" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.094551 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a4d2f2-e171-4f3a-b890-976343fdafc5-utilities\") pod \"certified-operators-xz9bh\" (UID: \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\") " pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.094819 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a4d2f2-e171-4f3a-b890-976343fdafc5-catalog-content\") pod \"certified-operators-xz9bh\" (UID: \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\") " pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.094963 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm88h\" (UniqueName: \"kubernetes.io/projected/e1a4d2f2-e171-4f3a-b890-976343fdafc5-kube-api-access-nm88h\") pod \"certified-operators-xz9bh\" (UID: \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\") " pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.095037 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:35 crc kubenswrapper[4730]: E0202 07:29:35.095365 4730 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.595351415 +0000 UTC m=+149.016554763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.095913 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a4d2f2-e171-4f3a-b890-976343fdafc5-utilities\") pod \"certified-operators-xz9bh\" (UID: \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\") " pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.096253 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a4d2f2-e171-4f3a-b890-976343fdafc5-catalog-content\") pod \"certified-operators-xz9bh\" (UID: \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\") " pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.122636 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v2qjt"] Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.123623 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.131535 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v2qjt"] Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.134024 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm88h\" (UniqueName: \"kubernetes.io/projected/e1a4d2f2-e171-4f3a-b890-976343fdafc5-kube-api-access-nm88h\") pod \"certified-operators-xz9bh\" (UID: \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\") " pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.194227 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" event={"ID":"12f62344-8d04-4340-a671-8f0e49012692","Type":"ContainerStarted","Data":"a8882d7a2eedcdbf198271ada1cfb895144e4da6ed3be3ad474fb7a94a2e16e5"} Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.194259 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" event={"ID":"12f62344-8d04-4340-a671-8f0e49012692","Type":"ContainerStarted","Data":"aa4e5ecb59714e26c703f9ab18d1d5ed7f25844e4abea09e03e5cd681f6584c0"} Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.195700 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.196002 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.196040 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b2fc64-25d1-4463-b474-79a9e3aa90db-catalog-content\") pod \"community-operators-v2qjt\" (UID: \"48b2fc64-25d1-4463-b474-79a9e3aa90db\") " pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.196075 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6gj\" (UniqueName: \"kubernetes.io/projected/48b2fc64-25d1-4463-b474-79a9e3aa90db-kube-api-access-wf6gj\") pod \"community-operators-v2qjt\" (UID: \"48b2fc64-25d1-4463-b474-79a9e3aa90db\") " pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.196176 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.196207 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.196234 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b2fc64-25d1-4463-b474-79a9e3aa90db-utilities\") pod \"community-operators-v2qjt\" (UID: \"48b2fc64-25d1-4463-b474-79a9e3aa90db\") " pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.196270 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:29:35 crc kubenswrapper[4730]: E0202 07:29:35.196941 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.696927686 +0000 UTC m=+149.118131034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.199144 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.202870 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.205083 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.210772 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.227440 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.300809 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf6gj\" (UniqueName: \"kubernetes.io/projected/48b2fc64-25d1-4463-b474-79a9e3aa90db-kube-api-access-wf6gj\") pod \"community-operators-v2qjt\" (UID: \"48b2fc64-25d1-4463-b474-79a9e3aa90db\") " pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.300915 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.301665 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b2fc64-25d1-4463-b474-79a9e3aa90db-utilities\") pod \"community-operators-v2qjt\" (UID: \"48b2fc64-25d1-4463-b474-79a9e3aa90db\") " pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.302033 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b2fc64-25d1-4463-b474-79a9e3aa90db-catalog-content\") pod \"community-operators-v2qjt\" (UID: \"48b2fc64-25d1-4463-b474-79a9e3aa90db\") " 
pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:29:35 crc kubenswrapper[4730]: E0202 07:29:35.302640 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.802626657 +0000 UTC m=+149.223830005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.303051 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b2fc64-25d1-4463-b474-79a9e3aa90db-utilities\") pod \"community-operators-v2qjt\" (UID: \"48b2fc64-25d1-4463-b474-79a9e3aa90db\") " pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.304774 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b2fc64-25d1-4463-b474-79a9e3aa90db-catalog-content\") pod \"community-operators-v2qjt\" (UID: \"48b2fc64-25d1-4463-b474-79a9e3aa90db\") " pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.315746 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-x6l8n" podStartSLOduration=9.315725993 podStartE2EDuration="9.315725993s" podCreationTimestamp="2026-02-02 07:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:35.229962369 +0000 UTC m=+148.651165717" watchObservedRunningTime="2026-02-02 07:29:35.315725993 +0000 UTC m=+148.736929351" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.317321 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sk8kx"] Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.317732 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.336376 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6gj\" (UniqueName: \"kubernetes.io/projected/48b2fc64-25d1-4463-b474-79a9e3aa90db-kube-api-access-wf6gj\") pod \"community-operators-v2qjt\" (UID: \"48b2fc64-25d1-4463-b474-79a9e3aa90db\") " pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:29:35 crc kubenswrapper[4730]: W0202 07:29:35.360477 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5534cdef_89e0_4d1c_b5b6_24a739696063.slice/crio-cc8786193182394ec3456353e4461d10f85c687a180d8ac5b5621d5291746ede WatchSource:0}: Error finding container cc8786193182394ec3456353e4461d10f85c687a180d8ac5b5621d5291746ede: Status 404 returned error can't find the container with id cc8786193182394ec3456353e4461d10f85c687a180d8ac5b5621d5291746ede Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.383010 4730 patch_prober.go:28] interesting pod/router-default-5444994796-229kn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 07:29:35 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 02 07:29:35 crc kubenswrapper[4730]: [+]process-running ok Feb 02 07:29:35 crc 
kubenswrapper[4730]: healthz check failed Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.383061 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-229kn" podUID="ccdd934a-e3ae-459f-b8a6-20349fae2c4d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.407726 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:35 crc kubenswrapper[4730]: E0202 07:29:35.408182 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:35.908139952 +0000 UTC m=+149.329343300 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.471855 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.477277 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.506624 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.509540 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:35 crc kubenswrapper[4730]: E0202 07:29:35.511409 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:36.011391378 +0000 UTC m=+149.432594726 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.610683 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:35 crc kubenswrapper[4730]: E0202 07:29:35.617703 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:36.117665584 +0000 UTC m=+149.538868932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.654611 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tpd9l"] Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.718309 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:35 crc kubenswrapper[4730]: E0202 07:29:35.718659 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:36.2186472 +0000 UTC m=+149.639850548 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.820323 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:35 crc kubenswrapper[4730]: E0202 07:29:35.820654 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:36.320634842 +0000 UTC m=+149.741838190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.891349 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xz9bh"] Feb 02 07:29:35 crc kubenswrapper[4730]: I0202 07:29:35.921387 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:35 crc kubenswrapper[4730]: E0202 07:29:35.921671 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:36.421660739 +0000 UTC m=+149.842864087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:35 crc kubenswrapper[4730]: W0202 07:29:35.996566 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1a4d2f2_e171_4f3a_b890_976343fdafc5.slice/crio-6ea8801eaa4081325fcac33a0332f8cf9aee9b70e58291eb028312cc0d4fef11 WatchSource:0}: Error finding container 6ea8801eaa4081325fcac33a0332f8cf9aee9b70e58291eb028312cc0d4fef11: Status 404 returned error can't find the container with id 6ea8801eaa4081325fcac33a0332f8cf9aee9b70e58291eb028312cc0d4fef11 Feb 02 07:29:36 crc kubenswrapper[4730]: W0202 07:29:35.999421 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-73958e5a69f14393cc45e2a68e09f17874245e768762460a8bc7c85e7725521b WatchSource:0}: Error finding container 73958e5a69f14393cc45e2a68e09f17874245e768762460a8bc7c85e7725521b: Status 404 returned error can't find the container with id 73958e5a69f14393cc45e2a68e09f17874245e768762460a8bc7c85e7725521b Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.022694 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:36 crc kubenswrapper[4730]: E0202 07:29:36.023141 4730 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:36.523122807 +0000 UTC m=+149.944326155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.124593 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:36 crc kubenswrapper[4730]: E0202 07:29:36.124952 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:36.624939275 +0000 UTC m=+150.046142623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.140403 4730 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.199549 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xz9bh" event={"ID":"e1a4d2f2-e171-4f3a-b890-976343fdafc5","Type":"ContainerStarted","Data":"6ea8801eaa4081325fcac33a0332f8cf9aee9b70e58291eb028312cc0d4fef11"} Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.200964 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7037b7f75907469b2441dc31897c8aac1df831301a09e667d6816cc48f7ff2a4"} Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.200986 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"73958e5a69f14393cc45e2a68e09f17874245e768762460a8bc7c85e7725521b"} Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.202302 4730 generic.go:334] "Generic (PLEG): container finished" podID="5534cdef-89e0-4d1c-b5b6-24a739696063" containerID="b489e4d3882d746b4324fbdf0cc006b88f97ccd18650e2f13505f50e4434695b" exitCode=0 Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 
07:29:36.202349 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk8kx" event={"ID":"5534cdef-89e0-4d1c-b5b6-24a739696063","Type":"ContainerDied","Data":"b489e4d3882d746b4324fbdf0cc006b88f97ccd18650e2f13505f50e4434695b"} Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.202363 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk8kx" event={"ID":"5534cdef-89e0-4d1c-b5b6-24a739696063","Type":"ContainerStarted","Data":"cc8786193182394ec3456353e4461d10f85c687a180d8ac5b5621d5291746ede"} Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.204801 4730 generic.go:334] "Generic (PLEG): container finished" podID="10e61c85-454b-47fa-8827-5a1de18dcfdf" containerID="d3262ce64287acf1d323030d25c834e80386a99a74488ea47de8ee2b5829a942" exitCode=0 Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.204856 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpd9l" event={"ID":"10e61c85-454b-47fa-8827-5a1de18dcfdf","Type":"ContainerDied","Data":"d3262ce64287acf1d323030d25c834e80386a99a74488ea47de8ee2b5829a942"} Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.204874 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpd9l" event={"ID":"10e61c85-454b-47fa-8827-5a1de18dcfdf","Type":"ContainerStarted","Data":"7a41ca262f5659554a1541527f74a5170ef089890f42b489ac0c93d2819eb0e4"} Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.205194 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.206512 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f01ebc00bb72b7db671b2c9f6df92b65f9aecd75135a67dd3c038d990f764bdf"} Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.206540 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"58e46663aae29a3b9131435d8b2c4df5c3e047c8f7f2f048d1ecf9603e681418"} Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.225738 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:36 crc kubenswrapper[4730]: E0202 07:29:36.225934 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 07:29:36.72590951 +0000 UTC m=+150.147112908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.308629 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v2qjt"] Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.326927 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:36 crc kubenswrapper[4730]: E0202 07:29:36.328684 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 07:29:36.828669733 +0000 UTC m=+150.249873081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xmtgb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.334589 4730 patch_prober.go:28] interesting pod/router-default-5444994796-229kn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 07:29:36 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 02 07:29:36 crc kubenswrapper[4730]: [+]process-running ok Feb 02 07:29:36 crc kubenswrapper[4730]: healthz check failed Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.334623 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-229kn" podUID="ccdd934a-e3ae-459f-b8a6-20349fae2c4d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.427747 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:36 crc kubenswrapper[4730]: E0202 07:29:36.428138 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 07:29:36.928105038 +0000 UTC m=+150.349308386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.429002 4730 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T07:29:36.140427774Z","Handler":null,"Name":""} Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.432228 4730 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.432270 4730 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.529048 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.531867 4730 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.531914 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.553274 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xmtgb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.630804 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.650881 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.845012 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.908834 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-prb8b"] Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.909799 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prb8b" Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.912731 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.924258 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-prb8b"] Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.935389 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cac6c492-5297-4467-b15b-d211bd932d9e-catalog-content\") pod \"redhat-marketplace-prb8b\" (UID: \"cac6c492-5297-4467-b15b-d211bd932d9e\") " pod="openshift-marketplace/redhat-marketplace-prb8b" Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.935547 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sng6r\" (UniqueName: \"kubernetes.io/projected/cac6c492-5297-4467-b15b-d211bd932d9e-kube-api-access-sng6r\") pod \"redhat-marketplace-prb8b\" (UID: \"cac6c492-5297-4467-b15b-d211bd932d9e\") " pod="openshift-marketplace/redhat-marketplace-prb8b" Feb 02 07:29:36 crc kubenswrapper[4730]: I0202 07:29:36.935663 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/cac6c492-5297-4467-b15b-d211bd932d9e-utilities\") pod \"redhat-marketplace-prb8b\" (UID: \"cac6c492-5297-4467-b15b-d211bd932d9e\") " pod="openshift-marketplace/redhat-marketplace-prb8b" Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.041660 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cac6c492-5297-4467-b15b-d211bd932d9e-catalog-content\") pod \"redhat-marketplace-prb8b\" (UID: \"cac6c492-5297-4467-b15b-d211bd932d9e\") " pod="openshift-marketplace/redhat-marketplace-prb8b" Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.042037 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sng6r\" (UniqueName: \"kubernetes.io/projected/cac6c492-5297-4467-b15b-d211bd932d9e-kube-api-access-sng6r\") pod \"redhat-marketplace-prb8b\" (UID: \"cac6c492-5297-4467-b15b-d211bd932d9e\") " pod="openshift-marketplace/redhat-marketplace-prb8b" Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.042079 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cac6c492-5297-4467-b15b-d211bd932d9e-utilities\") pod \"redhat-marketplace-prb8b\" (UID: \"cac6c492-5297-4467-b15b-d211bd932d9e\") " pod="openshift-marketplace/redhat-marketplace-prb8b" Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.042688 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cac6c492-5297-4467-b15b-d211bd932d9e-utilities\") pod \"redhat-marketplace-prb8b\" (UID: \"cac6c492-5297-4467-b15b-d211bd932d9e\") " pod="openshift-marketplace/redhat-marketplace-prb8b" Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.042818 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cac6c492-5297-4467-b15b-d211bd932d9e-catalog-content\") pod \"redhat-marketplace-prb8b\" (UID: \"cac6c492-5297-4467-b15b-d211bd932d9e\") " pod="openshift-marketplace/redhat-marketplace-prb8b"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.068055 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sng6r\" (UniqueName: \"kubernetes.io/projected/cac6c492-5297-4467-b15b-d211bd932d9e-kube-api-access-sng6r\") pod \"redhat-marketplace-prb8b\" (UID: \"cac6c492-5297-4467-b15b-d211bd932d9e\") " pod="openshift-marketplace/redhat-marketplace-prb8b"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.086329 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xmtgb"]
Feb 02 07:29:37 crc kubenswrapper[4730]: W0202 07:29:37.105756 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55eaa89c_c6a4_4c2c_8218_f4235cdcc6fb.slice/crio-3a7399d1f5a19a35fe75d97efe9a9b4b94d71ce099bffa61c589c7902ec40007 WatchSource:0}: Error finding container 3a7399d1f5a19a35fe75d97efe9a9b4b94d71ce099bffa61c589c7902ec40007: Status 404 returned error can't find the container with id 3a7399d1f5a19a35fe75d97efe9a9b4b94d71ce099bffa61c589c7902ec40007
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.231362 4730 generic.go:334] "Generic (PLEG): container finished" podID="917fb6df-688b-4f0e-98eb-4bb26b37f6f8" containerID="c28261728e69767033d8b1b1c257405bf63e044ba07bc06fe5d57c1fbc9766f7" exitCode=0
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.231512 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" event={"ID":"917fb6df-688b-4f0e-98eb-4bb26b37f6f8","Type":"ContainerDied","Data":"c28261728e69767033d8b1b1c257405bf63e044ba07bc06fe5d57c1fbc9766f7"}
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.240639 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prb8b"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.242561 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" event={"ID":"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb","Type":"ContainerStarted","Data":"3a7399d1f5a19a35fe75d97efe9a9b4b94d71ce099bffa61c589c7902ec40007"}
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.242645 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.248449 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"55f250860fff0424850366c6dc3f9fdbfbb650633b7cc8ed8938c9e09c90b822"}
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.248492 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5fbf6e2c03a5b09bc293b5360fc152bd3bce2dff6f161d81dedd12143ee9c9f7"}
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.250597 4730 generic.go:334] "Generic (PLEG): container finished" podID="48b2fc64-25d1-4463-b474-79a9e3aa90db" containerID="50ded1a9e8599d9cc768f051c9b39152c346a4b21538ac6811bfae6878477921" exitCode=0
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.250702 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2qjt" event={"ID":"48b2fc64-25d1-4463-b474-79a9e3aa90db","Type":"ContainerDied","Data":"50ded1a9e8599d9cc768f051c9b39152c346a4b21538ac6811bfae6878477921"}
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.250731 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2qjt" event={"ID":"48b2fc64-25d1-4463-b474-79a9e3aa90db","Type":"ContainerStarted","Data":"c56decfe7f5a44fb09b2b76554498d65c9a50bfbf4c93fd82915ec79452f1370"}
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.253118 4730 generic.go:334] "Generic (PLEG): container finished" podID="e1a4d2f2-e171-4f3a-b890-976343fdafc5" containerID="7932b272a011d0946baca56eddbfddedf700e1109e9764c2399807f13b38b438" exitCode=0
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.262959 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.265061 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" podStartSLOduration=130.265049734 podStartE2EDuration="2m10.265049734s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:37.263686138 +0000 UTC m=+150.684889476" watchObservedRunningTime="2026-02-02 07:29:37.265049734 +0000 UTC m=+150.686253082"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.269043 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.269481 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xz9bh" event={"ID":"e1a4d2f2-e171-4f3a-b890-976343fdafc5","Type":"ContainerDied","Data":"7932b272a011d0946baca56eddbfddedf700e1109e9764c2399807f13b38b438"}
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.311606 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jmqk6"]
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.313216 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmqk6"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.319057 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmqk6"]
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.331352 4730 patch_prober.go:28] interesting pod/router-default-5444994796-229kn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 07:29:37 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld
Feb 02 07:29:37 crc kubenswrapper[4730]: [+]process-running ok
Feb 02 07:29:37 crc kubenswrapper[4730]: healthz check failed
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.331413 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-229kn" podUID="ccdd934a-e3ae-459f-b8a6-20349fae2c4d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.346291 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xcxh\" (UniqueName: \"kubernetes.io/projected/cb188db6-3952-4aa4-a29a-d92911e5f1e1-kube-api-access-7xcxh\") pod \"redhat-marketplace-jmqk6\" (UID: \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\") " pod="openshift-marketplace/redhat-marketplace-jmqk6"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.346410 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb188db6-3952-4aa4-a29a-d92911e5f1e1-catalog-content\") pod \"redhat-marketplace-jmqk6\" (UID: \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\") " pod="openshift-marketplace/redhat-marketplace-jmqk6"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.346463 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb188db6-3952-4aa4-a29a-d92911e5f1e1-utilities\") pod \"redhat-marketplace-jmqk6\" (UID: \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\") " pod="openshift-marketplace/redhat-marketplace-jmqk6"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.447595 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xcxh\" (UniqueName: \"kubernetes.io/projected/cb188db6-3952-4aa4-a29a-d92911e5f1e1-kube-api-access-7xcxh\") pod \"redhat-marketplace-jmqk6\" (UID: \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\") " pod="openshift-marketplace/redhat-marketplace-jmqk6"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.447994 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb188db6-3952-4aa4-a29a-d92911e5f1e1-catalog-content\") pod \"redhat-marketplace-jmqk6\" (UID: \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\") " pod="openshift-marketplace/redhat-marketplace-jmqk6"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.448041 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb188db6-3952-4aa4-a29a-d92911e5f1e1-utilities\") pod \"redhat-marketplace-jmqk6\" (UID: \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\") " pod="openshift-marketplace/redhat-marketplace-jmqk6"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.448620 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb188db6-3952-4aa4-a29a-d92911e5f1e1-catalog-content\") pod \"redhat-marketplace-jmqk6\" (UID: \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\") " pod="openshift-marketplace/redhat-marketplace-jmqk6"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.448840 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb188db6-3952-4aa4-a29a-d92911e5f1e1-utilities\") pod \"redhat-marketplace-jmqk6\" (UID: \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\") " pod="openshift-marketplace/redhat-marketplace-jmqk6"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.482128 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xcxh\" (UniqueName: \"kubernetes.io/projected/cb188db6-3952-4aa4-a29a-d92911e5f1e1-kube-api-access-7xcxh\") pod \"redhat-marketplace-jmqk6\" (UID: \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\") " pod="openshift-marketplace/redhat-marketplace-jmqk6"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.492433 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.493232 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.496826 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.506142 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.506888 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.521651 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-prb8b"]
Feb 02 07:29:37 crc kubenswrapper[4730]: W0202 07:29:37.538941 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcac6c492_5297_4467_b15b_d211bd932d9e.slice/crio-7baba3f602960c15d3d1b1b3a4c6479032f1f090c2bd7f769d71fe8bd67b01e1 WatchSource:0}: Error finding container 7baba3f602960c15d3d1b1b3a4c6479032f1f090c2bd7f769d71fe8bd67b01e1: Status 404 returned error can't find the container with id 7baba3f602960c15d3d1b1b3a4c6479032f1f090c2bd7f769d71fe8bd67b01e1
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.554268 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c7dcdcd-8f0e-4816-9253-9bac12ea3e61-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6c7dcdcd-8f0e-4816-9253-9bac12ea3e61\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.554412 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c7dcdcd-8f0e-4816-9253-9bac12ea3e61-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6c7dcdcd-8f0e-4816-9253-9bac12ea3e61\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.650863 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmqk6"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.655511 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c7dcdcd-8f0e-4816-9253-9bac12ea3e61-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6c7dcdcd-8f0e-4816-9253-9bac12ea3e61\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.655566 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c7dcdcd-8f0e-4816-9253-9bac12ea3e61-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6c7dcdcd-8f0e-4816-9253-9bac12ea3e61\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.655648 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c7dcdcd-8f0e-4816-9253-9bac12ea3e61-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6c7dcdcd-8f0e-4816-9253-9bac12ea3e61\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.672429 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c7dcdcd-8f0e-4816-9253-9bac12ea3e61-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6c7dcdcd-8f0e-4816-9253-9bac12ea3e61\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.703035 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-svn5k"]
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.704350 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-svn5k"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.712734 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-svn5k"]
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.714249 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.757136 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvfqd\" (UniqueName: \"kubernetes.io/projected/17c5db75-0318-476c-aab3-8ddab8adb360-kube-api-access-kvfqd\") pod \"redhat-operators-svn5k\" (UID: \"17c5db75-0318-476c-aab3-8ddab8adb360\") " pod="openshift-marketplace/redhat-operators-svn5k"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.757239 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c5db75-0318-476c-aab3-8ddab8adb360-utilities\") pod \"redhat-operators-svn5k\" (UID: \"17c5db75-0318-476c-aab3-8ddab8adb360\") " pod="openshift-marketplace/redhat-operators-svn5k"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.757306 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c5db75-0318-476c-aab3-8ddab8adb360-catalog-content\") pod \"redhat-operators-svn5k\" (UID: \"17c5db75-0318-476c-aab3-8ddab8adb360\") " pod="openshift-marketplace/redhat-operators-svn5k"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.828437 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.861392 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvfqd\" (UniqueName: \"kubernetes.io/projected/17c5db75-0318-476c-aab3-8ddab8adb360-kube-api-access-kvfqd\") pod \"redhat-operators-svn5k\" (UID: \"17c5db75-0318-476c-aab3-8ddab8adb360\") " pod="openshift-marketplace/redhat-operators-svn5k"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.861434 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c5db75-0318-476c-aab3-8ddab8adb360-utilities\") pod \"redhat-operators-svn5k\" (UID: \"17c5db75-0318-476c-aab3-8ddab8adb360\") " pod="openshift-marketplace/redhat-operators-svn5k"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.861479 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c5db75-0318-476c-aab3-8ddab8adb360-catalog-content\") pod \"redhat-operators-svn5k\" (UID: \"17c5db75-0318-476c-aab3-8ddab8adb360\") " pod="openshift-marketplace/redhat-operators-svn5k"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.862028 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c5db75-0318-476c-aab3-8ddab8adb360-utilities\") pod \"redhat-operators-svn5k\" (UID: \"17c5db75-0318-476c-aab3-8ddab8adb360\") " pod="openshift-marketplace/redhat-operators-svn5k"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.862693 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c5db75-0318-476c-aab3-8ddab8adb360-catalog-content\") pod \"redhat-operators-svn5k\" (UID: \"17c5db75-0318-476c-aab3-8ddab8adb360\") " pod="openshift-marketplace/redhat-operators-svn5k"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.882715 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvfqd\" (UniqueName: \"kubernetes.io/projected/17c5db75-0318-476c-aab3-8ddab8adb360-kube-api-access-kvfqd\") pod \"redhat-operators-svn5k\" (UID: \"17c5db75-0318-476c-aab3-8ddab8adb360\") " pod="openshift-marketplace/redhat-operators-svn5k"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.903691 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4cd5n"]
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.905292 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4cd5n"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.933933 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4cd5n"]
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.962594 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmjn8\" (UniqueName: \"kubernetes.io/projected/876e37b2-1950-4143-b730-eb121a64a0a8-kube-api-access-gmjn8\") pod \"redhat-operators-4cd5n\" (UID: \"876e37b2-1950-4143-b730-eb121a64a0a8\") " pod="openshift-marketplace/redhat-operators-4cd5n"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.963444 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/876e37b2-1950-4143-b730-eb121a64a0a8-catalog-content\") pod \"redhat-operators-4cd5n\" (UID: \"876e37b2-1950-4143-b730-eb121a64a0a8\") " pod="openshift-marketplace/redhat-operators-4cd5n"
Feb 02 07:29:37 crc kubenswrapper[4730]: I0202 07:29:37.963712 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/876e37b2-1950-4143-b730-eb121a64a0a8-utilities\") pod \"redhat-operators-4cd5n\" (UID: \"876e37b2-1950-4143-b730-eb121a64a0a8\") " pod="openshift-marketplace/redhat-operators-4cd5n"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.025773 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-svn5k"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.065218 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmjn8\" (UniqueName: \"kubernetes.io/projected/876e37b2-1950-4143-b730-eb121a64a0a8-kube-api-access-gmjn8\") pod \"redhat-operators-4cd5n\" (UID: \"876e37b2-1950-4143-b730-eb121a64a0a8\") " pod="openshift-marketplace/redhat-operators-4cd5n"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.065294 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/876e37b2-1950-4143-b730-eb121a64a0a8-catalog-content\") pod \"redhat-operators-4cd5n\" (UID: \"876e37b2-1950-4143-b730-eb121a64a0a8\") " pod="openshift-marketplace/redhat-operators-4cd5n"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.065388 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/876e37b2-1950-4143-b730-eb121a64a0a8-utilities\") pod \"redhat-operators-4cd5n\" (UID: \"876e37b2-1950-4143-b730-eb121a64a0a8\") " pod="openshift-marketplace/redhat-operators-4cd5n"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.066098 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/876e37b2-1950-4143-b730-eb121a64a0a8-utilities\") pod \"redhat-operators-4cd5n\" (UID: \"876e37b2-1950-4143-b730-eb121a64a0a8\") " pod="openshift-marketplace/redhat-operators-4cd5n"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.067359 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/876e37b2-1950-4143-b730-eb121a64a0a8-catalog-content\") pod \"redhat-operators-4cd5n\" (UID: \"876e37b2-1950-4143-b730-eb121a64a0a8\") " pod="openshift-marketplace/redhat-operators-4cd5n"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.087891 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmjn8\" (UniqueName: \"kubernetes.io/projected/876e37b2-1950-4143-b730-eb121a64a0a8-kube-api-access-gmjn8\") pod \"redhat-operators-4cd5n\" (UID: \"876e37b2-1950-4143-b730-eb121a64a0a8\") " pod="openshift-marketplace/redhat-operators-4cd5n"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.123985 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmqk6"]
Feb 02 07:29:38 crc kubenswrapper[4730]: W0202 07:29:38.158938 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb188db6_3952_4aa4_a29a_d92911e5f1e1.slice/crio-ca792a526ddafd01b425297df97a85623113cf0ff87710cc55f1d842ba81a64c WatchSource:0}: Error finding container ca792a526ddafd01b425297df97a85623113cf0ff87710cc55f1d842ba81a64c: Status 404 returned error can't find the container with id ca792a526ddafd01b425297df97a85623113cf0ff87710cc55f1d842ba81a64c
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.236816 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4cd5n"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.304519 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" event={"ID":"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb","Type":"ContainerStarted","Data":"acddef8fe01493bff8843f00235f021ba11b0b94ada3d47b6a0878ff2817b501"}
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.307306 4730 generic.go:334] "Generic (PLEG): container finished" podID="cac6c492-5297-4467-b15b-d211bd932d9e" containerID="27d4de6af5a44a04f26174361457db46414a6e2ff3ff114b386987d51da8449a" exitCode=0
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.307375 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prb8b" event={"ID":"cac6c492-5297-4467-b15b-d211bd932d9e","Type":"ContainerDied","Data":"27d4de6af5a44a04f26174361457db46414a6e2ff3ff114b386987d51da8449a"}
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.307400 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prb8b" event={"ID":"cac6c492-5297-4467-b15b-d211bd932d9e","Type":"ContainerStarted","Data":"7baba3f602960c15d3d1b1b3a4c6479032f1f090c2bd7f769d71fe8bd67b01e1"}
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.309653 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmqk6" event={"ID":"cb188db6-3952-4aa4-a29a-d92911e5f1e1","Type":"ContainerStarted","Data":"ca792a526ddafd01b425297df97a85623113cf0ff87710cc55f1d842ba81a64c"}
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.338954 4730 patch_prober.go:28] interesting pod/router-default-5444994796-229kn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 07:29:38 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld
Feb 02 07:29:38 crc kubenswrapper[4730]: [+]process-running ok
Feb 02 07:29:38 crc kubenswrapper[4730]: healthz check failed
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.339016 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-229kn" podUID="ccdd934a-e3ae-459f-b8a6-20349fae2c4d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.346593 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-r8rf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.346645 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r8rf5" podUID="00a8df73-2822-496c-8b52-435531e7cbf7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.346893 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-r8rf5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.346930 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-r8rf5" podUID="00a8df73-2822-496c-8b52-435531e7cbf7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.423776 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.429679 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-gnl7r"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.435651 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-gnl7r"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.602100 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-svn5k"]
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.636677 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.646367 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t46v8"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.655359 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75"
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.677788 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-config-volume\") pod \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\" (UID: \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\") "
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.677896 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9fdm\" (UniqueName: \"kubernetes.io/projected/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-kube-api-access-s9fdm\") pod \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\" (UID: \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\") "
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.677942 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-secret-volume\") pod \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\" (UID: \"917fb6df-688b-4f0e-98eb-4bb26b37f6f8\") "
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.678940 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-config-volume" (OuterVolumeSpecName: "config-volume") pod "917fb6df-688b-4f0e-98eb-4bb26b37f6f8" (UID: "917fb6df-688b-4f0e-98eb-4bb26b37f6f8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.679297 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.690405 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "917fb6df-688b-4f0e-98eb-4bb26b37f6f8" (UID: "917fb6df-688b-4f0e-98eb-4bb26b37f6f8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.695659 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-kube-api-access-s9fdm" (OuterVolumeSpecName: "kube-api-access-s9fdm") pod "917fb6df-688b-4f0e-98eb-4bb26b37f6f8" (UID: "917fb6df-688b-4f0e-98eb-4bb26b37f6f8"). InnerVolumeSpecName "kube-api-access-s9fdm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.780815 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9fdm\" (UniqueName: \"kubernetes.io/projected/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-kube-api-access-s9fdm\") on node \"crc\" DevicePath \"\""
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.780845 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/917fb6df-688b-4f0e-98eb-4bb26b37f6f8-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 07:29:38 crc kubenswrapper[4730]: I0202 07:29:38.883863 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4cd5n"]
Feb 02 07:29:38 crc kubenswrapper[4730]: W0202 07:29:38.920908 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod876e37b2_1950_4143_b730_eb121a64a0a8.slice/crio-d4b113c0cf6a0a0caa4fe40a023e70a871cc7d28caacbe7ad5755603e5fb6da7 WatchSource:0}: Error finding container d4b113c0cf6a0a0caa4fe40a023e70a871cc7d28caacbe7ad5755603e5fb6da7: Status 404 returned error can't find the container with id d4b113c0cf6a0a0caa4fe40a023e70a871cc7d28caacbe7ad5755603e5fb6da7
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.021423 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-tqvrx"
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.021462 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-tqvrx"
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.023552 4730 patch_prober.go:28] interesting pod/console-f9d7485db-tqvrx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.023586 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tqvrx" podUID="4994d05d-dfe5-42e1-81e9-8b4a09fb8934" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.328596 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-229kn"
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.330356 4730 generic.go:334] "Generic (PLEG): container finished" podID="876e37b2-1950-4143-b730-eb121a64a0a8" containerID="3bc7e0f9a472669cdd38ea228545b8816e82bdb1d8d2bcaf657b559b11cf2209" exitCode=0
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.330429 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cd5n" event={"ID":"876e37b2-1950-4143-b730-eb121a64a0a8","Type":"ContainerDied","Data":"3bc7e0f9a472669cdd38ea228545b8816e82bdb1d8d2bcaf657b559b11cf2209"}
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.330454 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cd5n" event={"ID":"876e37b2-1950-4143-b730-eb121a64a0a8","Type":"ContainerStarted","Data":"d4b113c0cf6a0a0caa4fe40a023e70a871cc7d28caacbe7ad5755603e5fb6da7"}
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.334467 4730 patch_prober.go:28] interesting pod/router-default-5444994796-229kn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 07:29:39 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld
Feb 02 07:29:39 crc kubenswrapper[4730]: [+]process-running ok
Feb 02 07:29:39 crc kubenswrapper[4730]: healthz check failed
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.334500 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-229kn" podUID="ccdd934a-e3ae-459f-b8a6-20349fae2c4d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.335560 4730 generic.go:334] "Generic (PLEG): container finished" podID="17c5db75-0318-476c-aab3-8ddab8adb360" containerID="273b11316d7461962f673134b16652126db15cd4717b3a966c5418c7e86bb713" exitCode=0
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.335616 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svn5k" event={"ID":"17c5db75-0318-476c-aab3-8ddab8adb360","Type":"ContainerDied","Data":"273b11316d7461962f673134b16652126db15cd4717b3a966c5418c7e86bb713"}
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.335636 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svn5k" event={"ID":"17c5db75-0318-476c-aab3-8ddab8adb360","Type":"ContainerStarted","Data":"0cf79a1f105537f6d0d490dbc6b2c739c84d2dd20ce724d1b044fa73278e8108"}
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.341460 4730 generic.go:334] "Generic (PLEG): container finished" podID="cb188db6-3952-4aa4-a29a-d92911e5f1e1" containerID="38ec852e4078471bb0001d1cb22e6546024eda3f3f7dcc0dc315db769794399f" exitCode=0
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.341521 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmqk6" event={"ID":"cb188db6-3952-4aa4-a29a-d92911e5f1e1","Type":"ContainerDied","Data":"38ec852e4078471bb0001d1cb22e6546024eda3f3f7dcc0dc315db769794399f"}
Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.350076 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
event={"ID":"6c7dcdcd-8f0e-4816-9253-9bac12ea3e61","Type":"ContainerStarted","Data":"5599403a4cd5acd45f0fb609cca0c2c1844a78964b9a281ee0ee21d3e3c8a43e"} Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.350104 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6c7dcdcd-8f0e-4816-9253-9bac12ea3e61","Type":"ContainerStarted","Data":"337476ca33649cc0e39154bb809da8913d10b0712daaab91c0bd458025b3c180"} Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.355835 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.355783 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-7fz75" event={"ID":"917fb6df-688b-4f0e-98eb-4bb26b37f6f8","Type":"ContainerDied","Data":"f818268693b1f2cb599fe1e3b493ace334ce372800b8b68d1ce6c9a726e1d5e8"} Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.356892 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f818268693b1f2cb599fe1e3b493ace334ce372800b8b68d1ce6c9a726e1d5e8" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.413514 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.413495883 podStartE2EDuration="2.413495883s" podCreationTimestamp="2026-02-02 07:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:29:39.404395712 +0000 UTC m=+152.825599060" watchObservedRunningTime="2026-02-02 07:29:39.413495883 +0000 UTC m=+152.834699231" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.413741 4730 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 07:29:39 crc kubenswrapper[4730]: E0202 07:29:39.413931 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917fb6df-688b-4f0e-98eb-4bb26b37f6f8" containerName="collect-profiles" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.413942 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="917fb6df-688b-4f0e-98eb-4bb26b37f6f8" containerName="collect-profiles" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.414036 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="917fb6df-688b-4f0e-98eb-4bb26b37f6f8" containerName="collect-profiles" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.414316 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.414388 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.415939 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.416250 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.515513 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.596745 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0619613e-d48d-48ae-b6cd-b5a2187e4d64-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0619613e-d48d-48ae-b6cd-b5a2187e4d64\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 
02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.596814 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0619613e-d48d-48ae-b6cd-b5a2187e4d64-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0619613e-d48d-48ae-b6cd-b5a2187e4d64\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.698724 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0619613e-d48d-48ae-b6cd-b5a2187e4d64-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0619613e-d48d-48ae-b6cd-b5a2187e4d64\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.698961 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0619613e-d48d-48ae-b6cd-b5a2187e4d64-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0619613e-d48d-48ae-b6cd-b5a2187e4d64\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.699039 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0619613e-d48d-48ae-b6cd-b5a2187e4d64-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0619613e-d48d-48ae-b6cd-b5a2187e4d64\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.749122 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0619613e-d48d-48ae-b6cd-b5a2187e4d64-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0619613e-d48d-48ae-b6cd-b5a2187e4d64\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 
07:29:39.767528 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 07:29:39 crc kubenswrapper[4730]: I0202 07:29:39.977799 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 07:29:40 crc kubenswrapper[4730]: I0202 07:29:40.331383 4730 patch_prober.go:28] interesting pod/router-default-5444994796-229kn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 07:29:40 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 02 07:29:40 crc kubenswrapper[4730]: [+]process-running ok Feb 02 07:29:40 crc kubenswrapper[4730]: healthz check failed Feb 02 07:29:40 crc kubenswrapper[4730]: I0202 07:29:40.331656 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-229kn" podUID="ccdd934a-e3ae-459f-b8a6-20349fae2c4d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 07:29:40 crc kubenswrapper[4730]: I0202 07:29:40.376751 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0619613e-d48d-48ae-b6cd-b5a2187e4d64","Type":"ContainerStarted","Data":"e260d6f56d6b81a4fe31d9914a4e511bcfda285847efed241a324e2d6e3bcb6d"} Feb 02 07:29:40 crc kubenswrapper[4730]: I0202 07:29:40.379608 4730 generic.go:334] "Generic (PLEG): container finished" podID="6c7dcdcd-8f0e-4816-9253-9bac12ea3e61" containerID="5599403a4cd5acd45f0fb609cca0c2c1844a78964b9a281ee0ee21d3e3c8a43e" exitCode=0 Feb 02 07:29:40 crc kubenswrapper[4730]: I0202 07:29:40.379665 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"6c7dcdcd-8f0e-4816-9253-9bac12ea3e61","Type":"ContainerDied","Data":"5599403a4cd5acd45f0fb609cca0c2c1844a78964b9a281ee0ee21d3e3c8a43e"} Feb 02 07:29:41 crc kubenswrapper[4730]: I0202 07:29:41.330739 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-229kn" Feb 02 07:29:41 crc kubenswrapper[4730]: I0202 07:29:41.335404 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-229kn" Feb 02 07:29:41 crc kubenswrapper[4730]: I0202 07:29:41.424007 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0619613e-d48d-48ae-b6cd-b5a2187e4d64","Type":"ContainerStarted","Data":"a069114424f8e15dd0e626dcf5a2ed282f32cb0dc5ea22a996c9ca1cb9d071de"} Feb 02 07:29:41 crc kubenswrapper[4730]: I0202 07:29:41.537665 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2r4hv" Feb 02 07:29:41 crc kubenswrapper[4730]: I0202 07:29:41.707947 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 07:29:41 crc kubenswrapper[4730]: I0202 07:29:41.738008 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c7dcdcd-8f0e-4816-9253-9bac12ea3e61-kubelet-dir\") pod \"6c7dcdcd-8f0e-4816-9253-9bac12ea3e61\" (UID: \"6c7dcdcd-8f0e-4816-9253-9bac12ea3e61\") " Feb 02 07:29:41 crc kubenswrapper[4730]: I0202 07:29:41.738121 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c7dcdcd-8f0e-4816-9253-9bac12ea3e61-kube-api-access\") pod \"6c7dcdcd-8f0e-4816-9253-9bac12ea3e61\" (UID: \"6c7dcdcd-8f0e-4816-9253-9bac12ea3e61\") " Feb 02 07:29:41 crc kubenswrapper[4730]: I0202 07:29:41.738242 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c7dcdcd-8f0e-4816-9253-9bac12ea3e61-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6c7dcdcd-8f0e-4816-9253-9bac12ea3e61" (UID: "6c7dcdcd-8f0e-4816-9253-9bac12ea3e61"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:29:41 crc kubenswrapper[4730]: I0202 07:29:41.738567 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c7dcdcd-8f0e-4816-9253-9bac12ea3e61-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 07:29:41 crc kubenswrapper[4730]: I0202 07:29:41.747250 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7dcdcd-8f0e-4816-9253-9bac12ea3e61-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6c7dcdcd-8f0e-4816-9253-9bac12ea3e61" (UID: "6c7dcdcd-8f0e-4816-9253-9bac12ea3e61"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:29:41 crc kubenswrapper[4730]: I0202 07:29:41.839672 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c7dcdcd-8f0e-4816-9253-9bac12ea3e61-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 07:29:42 crc kubenswrapper[4730]: I0202 07:29:42.434371 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 07:29:42 crc kubenswrapper[4730]: I0202 07:29:42.434397 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6c7dcdcd-8f0e-4816-9253-9bac12ea3e61","Type":"ContainerDied","Data":"337476ca33649cc0e39154bb809da8913d10b0712daaab91c0bd458025b3c180"} Feb 02 07:29:42 crc kubenswrapper[4730]: I0202 07:29:42.434506 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="337476ca33649cc0e39154bb809da8913d10b0712daaab91c0bd458025b3c180" Feb 02 07:29:42 crc kubenswrapper[4730]: I0202 07:29:42.436364 4730 generic.go:334] "Generic (PLEG): container finished" podID="0619613e-d48d-48ae-b6cd-b5a2187e4d64" containerID="a069114424f8e15dd0e626dcf5a2ed282f32cb0dc5ea22a996c9ca1cb9d071de" exitCode=0 Feb 02 07:29:42 crc kubenswrapper[4730]: I0202 07:29:42.436424 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0619613e-d48d-48ae-b6cd-b5a2187e4d64","Type":"ContainerDied","Data":"a069114424f8e15dd0e626dcf5a2ed282f32cb0dc5ea22a996c9ca1cb9d071de"} Feb 02 07:29:48 crc kubenswrapper[4730]: I0202 07:29:48.351064 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-r8rf5" Feb 02 07:29:49 crc kubenswrapper[4730]: I0202 07:29:49.025112 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-f9d7485db-tqvrx" Feb 02 07:29:49 crc kubenswrapper[4730]: I0202 07:29:49.032418 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-tqvrx" Feb 02 07:29:49 crc kubenswrapper[4730]: I0202 07:29:49.856853 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs\") pod \"network-metrics-daemon-xrjth\" (UID: \"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\") " pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:29:49 crc kubenswrapper[4730]: I0202 07:29:49.862049 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc-metrics-certs\") pod \"network-metrics-daemon-xrjth\" (UID: \"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc\") " pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:29:49 crc kubenswrapper[4730]: I0202 07:29:49.889862 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrjth" Feb 02 07:29:50 crc kubenswrapper[4730]: I0202 07:29:50.755958 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 07:29:50 crc kubenswrapper[4730]: I0202 07:29:50.869315 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0619613e-d48d-48ae-b6cd-b5a2187e4d64-kube-api-access\") pod \"0619613e-d48d-48ae-b6cd-b5a2187e4d64\" (UID: \"0619613e-d48d-48ae-b6cd-b5a2187e4d64\") " Feb 02 07:29:50 crc kubenswrapper[4730]: I0202 07:29:50.869394 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0619613e-d48d-48ae-b6cd-b5a2187e4d64-kubelet-dir\") pod \"0619613e-d48d-48ae-b6cd-b5a2187e4d64\" (UID: \"0619613e-d48d-48ae-b6cd-b5a2187e4d64\") " Feb 02 07:29:50 crc kubenswrapper[4730]: I0202 07:29:50.869651 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0619613e-d48d-48ae-b6cd-b5a2187e4d64-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0619613e-d48d-48ae-b6cd-b5a2187e4d64" (UID: "0619613e-d48d-48ae-b6cd-b5a2187e4d64"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:29:50 crc kubenswrapper[4730]: I0202 07:29:50.869831 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0619613e-d48d-48ae-b6cd-b5a2187e4d64-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 07:29:50 crc kubenswrapper[4730]: I0202 07:29:50.872953 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0619613e-d48d-48ae-b6cd-b5a2187e4d64-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0619613e-d48d-48ae-b6cd-b5a2187e4d64" (UID: "0619613e-d48d-48ae-b6cd-b5a2187e4d64"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:29:50 crc kubenswrapper[4730]: I0202 07:29:50.971623 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0619613e-d48d-48ae-b6cd-b5a2187e4d64-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 07:29:51 crc kubenswrapper[4730]: I0202 07:29:51.486330 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0619613e-d48d-48ae-b6cd-b5a2187e4d64","Type":"ContainerDied","Data":"e260d6f56d6b81a4fe31d9914a4e511bcfda285847efed241a324e2d6e3bcb6d"} Feb 02 07:29:51 crc kubenswrapper[4730]: I0202 07:29:51.486376 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e260d6f56d6b81a4fe31d9914a4e511bcfda285847efed241a324e2d6e3bcb6d" Feb 02 07:29:51 crc kubenswrapper[4730]: I0202 07:29:51.486391 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 07:29:51 crc kubenswrapper[4730]: I0202 07:29:51.672139 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-72nv7"] Feb 02 07:29:51 crc kubenswrapper[4730]: I0202 07:29:51.672509 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" podUID="01d50625-677d-463d-9439-2d7fd88fb649" containerName="controller-manager" containerID="cri-o://d536b46800784026280cecc85439e2fe6a4b74b2b6b53b439e80f7db558e529a" gracePeriod=30 Feb 02 07:29:51 crc kubenswrapper[4730]: I0202 07:29:51.688103 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6"] Feb 02 07:29:51 crc kubenswrapper[4730]: I0202 07:29:51.688317 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" podUID="9f07ac35-374b-4f55-af36-db35361500c4" containerName="route-controller-manager" containerID="cri-o://70029b7d662886721485f0837781f9388bc3eed936f124a10f2805a52a77f2d1" gracePeriod=30 Feb 02 07:29:54 crc kubenswrapper[4730]: I0202 07:29:54.505752 4730 generic.go:334] "Generic (PLEG): container finished" podID="01d50625-677d-463d-9439-2d7fd88fb649" containerID="d536b46800784026280cecc85439e2fe6a4b74b2b6b53b439e80f7db558e529a" exitCode=0 Feb 02 07:29:54 crc kubenswrapper[4730]: I0202 07:29:54.505894 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" event={"ID":"01d50625-677d-463d-9439-2d7fd88fb649","Type":"ContainerDied","Data":"d536b46800784026280cecc85439e2fe6a4b74b2b6b53b439e80f7db558e529a"} Feb 02 07:29:54 crc kubenswrapper[4730]: I0202 07:29:54.508733 4730 generic.go:334] "Generic (PLEG): container finished" podID="9f07ac35-374b-4f55-af36-db35361500c4" containerID="70029b7d662886721485f0837781f9388bc3eed936f124a10f2805a52a77f2d1" exitCode=0 Feb 02 07:29:54 crc kubenswrapper[4730]: I0202 07:29:54.508784 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" event={"ID":"9f07ac35-374b-4f55-af36-db35361500c4","Type":"ContainerDied","Data":"70029b7d662886721485f0837781f9388bc3eed936f124a10f2805a52a77f2d1"} Feb 02 07:29:56 crc kubenswrapper[4730]: I0202 07:29:56.849801 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:29:57 crc kubenswrapper[4730]: I0202 07:29:57.660888 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 02 07:29:57 crc kubenswrapper[4730]: I0202 07:29:57.660971 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:29:58 crc kubenswrapper[4730]: I0202 07:29:58.303437 4730 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-djvb6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 02 07:29:58 crc kubenswrapper[4730]: I0202 07:29:58.303505 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" podUID="9f07ac35-374b-4f55-af36-db35361500c4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 02 07:29:58 crc kubenswrapper[4730]: I0202 07:29:58.614566 4730 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-72nv7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 02 07:29:58 crc kubenswrapper[4730]: I0202 07:29:58.614897 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" podUID="01d50625-677d-463d-9439-2d7fd88fb649" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 02 07:30:00 
crc kubenswrapper[4730]: I0202 07:30:00.132072 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd"] Feb 02 07:30:00 crc kubenswrapper[4730]: E0202 07:30:00.133721 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0619613e-d48d-48ae-b6cd-b5a2187e4d64" containerName="pruner" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.133741 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="0619613e-d48d-48ae-b6cd-b5a2187e4d64" containerName="pruner" Feb 02 07:30:00 crc kubenswrapper[4730]: E0202 07:30:00.133755 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7dcdcd-8f0e-4816-9253-9bac12ea3e61" containerName="pruner" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.133791 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7dcdcd-8f0e-4816-9253-9bac12ea3e61" containerName="pruner" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.133970 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="0619613e-d48d-48ae-b6cd-b5a2187e4d64" containerName="pruner" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.133990 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7dcdcd-8f0e-4816-9253-9bac12ea3e61" containerName="pruner" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.134660 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.137201 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.141030 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.145283 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd"] Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.192900 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/135e45db-9947-4e92-9cfb-1cb0a95e13d7-config-volume\") pod \"collect-profiles-29500290-5wlqd\" (UID: \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.192945 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/135e45db-9947-4e92-9cfb-1cb0a95e13d7-secret-volume\") pod \"collect-profiles-29500290-5wlqd\" (UID: \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.193037 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w79p\" (UniqueName: \"kubernetes.io/projected/135e45db-9947-4e92-9cfb-1cb0a95e13d7-kube-api-access-7w79p\") pod \"collect-profiles-29500290-5wlqd\" (UID: \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.294149 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/135e45db-9947-4e92-9cfb-1cb0a95e13d7-config-volume\") pod \"collect-profiles-29500290-5wlqd\" (UID: \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.294230 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/135e45db-9947-4e92-9cfb-1cb0a95e13d7-secret-volume\") pod \"collect-profiles-29500290-5wlqd\" (UID: \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.294290 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w79p\" (UniqueName: \"kubernetes.io/projected/135e45db-9947-4e92-9cfb-1cb0a95e13d7-kube-api-access-7w79p\") pod \"collect-profiles-29500290-5wlqd\" (UID: \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.295261 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/135e45db-9947-4e92-9cfb-1cb0a95e13d7-config-volume\") pod \"collect-profiles-29500290-5wlqd\" (UID: \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.302966 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/135e45db-9947-4e92-9cfb-1cb0a95e13d7-secret-volume\") pod \"collect-profiles-29500290-5wlqd\" (UID: \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.320496 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w79p\" (UniqueName: \"kubernetes.io/projected/135e45db-9947-4e92-9cfb-1cb0a95e13d7-kube-api-access-7w79p\") pod \"collect-profiles-29500290-5wlqd\" (UID: \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" Feb 02 07:30:00 crc kubenswrapper[4730]: I0202 07:30:00.456075 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.192585 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.195628 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.221033 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-proxy-ca-bundles\") pod \"01d50625-677d-463d-9439-2d7fd88fb649\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.221081 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x65jd\" (UniqueName: \"kubernetes.io/projected/9f07ac35-374b-4f55-af36-db35361500c4-kube-api-access-x65jd\") pod \"9f07ac35-374b-4f55-af36-db35361500c4\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.221123 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-config\") pod \"01d50625-677d-463d-9439-2d7fd88fb649\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.221147 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f07ac35-374b-4f55-af36-db35361500c4-config\") pod \"9f07ac35-374b-4f55-af36-db35361500c4\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.221180 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f07ac35-374b-4f55-af36-db35361500c4-serving-cert\") pod \"9f07ac35-374b-4f55-af36-db35361500c4\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.221223 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-client-ca\") pod \"01d50625-677d-463d-9439-2d7fd88fb649\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.221246 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jr2m\" (UniqueName: \"kubernetes.io/projected/01d50625-677d-463d-9439-2d7fd88fb649-kube-api-access-4jr2m\") pod \"01d50625-677d-463d-9439-2d7fd88fb649\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.221265 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f07ac35-374b-4f55-af36-db35361500c4-client-ca\") pod \"9f07ac35-374b-4f55-af36-db35361500c4\" (UID: \"9f07ac35-374b-4f55-af36-db35361500c4\") " Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.221301 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d50625-677d-463d-9439-2d7fd88fb649-serving-cert\") pod \"01d50625-677d-463d-9439-2d7fd88fb649\" (UID: \"01d50625-677d-463d-9439-2d7fd88fb649\") " Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.222520 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-client-ca" (OuterVolumeSpecName: "client-ca") pod "01d50625-677d-463d-9439-2d7fd88fb649" (UID: "01d50625-677d-463d-9439-2d7fd88fb649"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.222550 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "01d50625-677d-463d-9439-2d7fd88fb649" (UID: "01d50625-677d-463d-9439-2d7fd88fb649"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.223087 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-599cc55994-c7bz5"] Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.223223 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-config" (OuterVolumeSpecName: "config") pod "01d50625-677d-463d-9439-2d7fd88fb649" (UID: "01d50625-677d-463d-9439-2d7fd88fb649"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:02 crc kubenswrapper[4730]: E0202 07:30:02.223295 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f07ac35-374b-4f55-af36-db35361500c4" containerName="route-controller-manager" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.223308 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f07ac35-374b-4f55-af36-db35361500c4" containerName="route-controller-manager" Feb 02 07:30:02 crc kubenswrapper[4730]: E0202 07:30:02.223317 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d50625-677d-463d-9439-2d7fd88fb649" containerName="controller-manager" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.223323 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d50625-677d-463d-9439-2d7fd88fb649" containerName="controller-manager" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.223413 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f07ac35-374b-4f55-af36-db35361500c4" containerName="route-controller-manager" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.223424 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d50625-677d-463d-9439-2d7fd88fb649" containerName="controller-manager" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.224058 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.224098 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f07ac35-374b-4f55-af36-db35361500c4-config" (OuterVolumeSpecName: "config") pod "9f07ac35-374b-4f55-af36-db35361500c4" (UID: "9f07ac35-374b-4f55-af36-db35361500c4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.231450 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f07ac35-374b-4f55-af36-db35361500c4-kube-api-access-x65jd" (OuterVolumeSpecName: "kube-api-access-x65jd") pod "9f07ac35-374b-4f55-af36-db35361500c4" (UID: "9f07ac35-374b-4f55-af36-db35361500c4"). InnerVolumeSpecName "kube-api-access-x65jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.237540 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f07ac35-374b-4f55-af36-db35361500c4-client-ca" (OuterVolumeSpecName: "client-ca") pod "9f07ac35-374b-4f55-af36-db35361500c4" (UID: "9f07ac35-374b-4f55-af36-db35361500c4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.239716 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-599cc55994-c7bz5"] Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.248346 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01d50625-677d-463d-9439-2d7fd88fb649-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01d50625-677d-463d-9439-2d7fd88fb649" (UID: "01d50625-677d-463d-9439-2d7fd88fb649"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.280550 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d50625-677d-463d-9439-2d7fd88fb649-kube-api-access-4jr2m" (OuterVolumeSpecName: "kube-api-access-4jr2m") pod "01d50625-677d-463d-9439-2d7fd88fb649" (UID: "01d50625-677d-463d-9439-2d7fd88fb649"). InnerVolumeSpecName "kube-api-access-4jr2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.280681 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f07ac35-374b-4f55-af36-db35361500c4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9f07ac35-374b-4f55-af36-db35361500c4" (UID: "9f07ac35-374b-4f55-af36-db35361500c4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.322605 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-config\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.322719 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-client-ca\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.322756 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-proxy-ca-bundles\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.322818 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24n25\" (UniqueName: 
\"kubernetes.io/projected/150f7392-de56-4c50-8408-5de0fb5829d0-kube-api-access-24n25\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.322873 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/150f7392-de56-4c50-8408-5de0fb5829d0-serving-cert\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.323153 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.323188 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jr2m\" (UniqueName: \"kubernetes.io/projected/01d50625-677d-463d-9439-2d7fd88fb649-kube-api-access-4jr2m\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.323200 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f07ac35-374b-4f55-af36-db35361500c4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.323210 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d50625-677d-463d-9439-2d7fd88fb649-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.323243 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-proxy-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.323255 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x65jd\" (UniqueName: \"kubernetes.io/projected/9f07ac35-374b-4f55-af36-db35361500c4-kube-api-access-x65jd\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.323265 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d50625-677d-463d-9439-2d7fd88fb649-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.323279 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f07ac35-374b-4f55-af36-db35361500c4-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.323289 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f07ac35-374b-4f55-af36-db35361500c4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.353081 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd"] Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.382669 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xrjth"] Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.424032 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-config\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.424089 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-client-ca\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.424112 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-proxy-ca-bundles\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.424206 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24n25\" (UniqueName: \"kubernetes.io/projected/150f7392-de56-4c50-8408-5de0fb5829d0-kube-api-access-24n25\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.424230 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/150f7392-de56-4c50-8408-5de0fb5829d0-serving-cert\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.425179 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-client-ca\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 
07:30:02.425733 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-config\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.426919 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-proxy-ca-bundles\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.428737 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/150f7392-de56-4c50-8408-5de0fb5829d0-serving-cert\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.441109 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24n25\" (UniqueName: \"kubernetes.io/projected/150f7392-de56-4c50-8408-5de0fb5829d0-kube-api-access-24n25\") pod \"controller-manager-599cc55994-c7bz5\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.554231 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.554238 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-72nv7" event={"ID":"01d50625-677d-463d-9439-2d7fd88fb649","Type":"ContainerDied","Data":"a6535f7c5336bd6c81da620eca492b62442298a91ee2f243f5f39d6f596986ef"} Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.555026 4730 scope.go:117] "RemoveContainer" containerID="d536b46800784026280cecc85439e2fe6a4b74b2b6b53b439e80f7db558e529a" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.562173 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xz9bh" event={"ID":"e1a4d2f2-e171-4f3a-b890-976343fdafc5","Type":"ContainerStarted","Data":"f75c57f3782231d6703e7d5805d667e742d5f899225d66ea2bc8fb6741510542"} Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.569136 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrjth" event={"ID":"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc","Type":"ContainerStarted","Data":"5463451d2fcc040b638886ca42a9c13a5145e0d03f924994c58d45716c395ea6"} Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.583090 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.594839 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-72nv7"] Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.597062 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" event={"ID":"135e45db-9947-4e92-9cfb-1cb0a95e13d7","Type":"ContainerStarted","Data":"522189bd93b998041d2646ea34b921a7eda031e4b7c39e9c4780f0545b77c272"} Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.598003 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-72nv7"] Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.619899 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.619932 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6" event={"ID":"9f07ac35-374b-4f55-af36-db35361500c4","Type":"ContainerDied","Data":"8b7e28f6b4bc79b47a8668523ed427f8fa8fab0648bb7bdb96bfdfc48a555220"} Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.619971 4730 scope.go:117] "RemoveContainer" containerID="70029b7d662886721485f0837781f9388bc3eed936f124a10f2805a52a77f2d1" Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.644757 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svn5k" event={"ID":"17c5db75-0318-476c-aab3-8ddab8adb360","Type":"ContainerStarted","Data":"d3a3b6025c7ae749f4306e3c70d40c7907bf6ae524d483bfae99c483c694dbcd"} Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.680532 4730 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6"] Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.684575 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-djvb6"] Feb 02 07:30:02 crc kubenswrapper[4730]: I0202 07:30:02.925438 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-599cc55994-c7bz5"] Feb 02 07:30:03 crc kubenswrapper[4730]: W0202 07:30:03.053205 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod150f7392_de56_4c50_8408_5de0fb5829d0.slice/crio-d49f2110426a8f29a0b77a47dacd67b3c73fb4156a933a36e65b7d71c9f6d245 WatchSource:0}: Error finding container d49f2110426a8f29a0b77a47dacd67b3c73fb4156a933a36e65b7d71c9f6d245: Status 404 returned error can't find the container with id d49f2110426a8f29a0b77a47dacd67b3c73fb4156a933a36e65b7d71c9f6d245 Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.267739 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01d50625-677d-463d-9439-2d7fd88fb649" path="/var/lib/kubelet/pods/01d50625-677d-463d-9439-2d7fd88fb649/volumes" Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.268795 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f07ac35-374b-4f55-af36-db35361500c4" path="/var/lib/kubelet/pods/9f07ac35-374b-4f55-af36-db35361500c4/volumes" Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.651920 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" event={"ID":"150f7392-de56-4c50-8408-5de0fb5829d0","Type":"ContainerStarted","Data":"048b46e9a4ff6756d9f821d437af52a46ba3d8f26f5ff3bb6fba252a58aaca29"} Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.652231 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" event={"ID":"150f7392-de56-4c50-8408-5de0fb5829d0","Type":"ContainerStarted","Data":"d49f2110426a8f29a0b77a47dacd67b3c73fb4156a933a36e65b7d71c9f6d245"} Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.656318 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrjth" event={"ID":"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc","Type":"ContainerStarted","Data":"b8e7027f35b5353a0f227e3ad07fbe9edf50ab0bf837b9994b07d70f727abbf9"} Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.656348 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrjth" event={"ID":"f4d8b214-dcd2-4b9f-8031-a0ab3e731dfc","Type":"ContainerStarted","Data":"81178f50f3f512dd2d408d799276d05638140a93dcb492dfcd127aa7a9142e29"} Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.658534 4730 generic.go:334] "Generic (PLEG): container finished" podID="135e45db-9947-4e92-9cfb-1cb0a95e13d7" containerID="fb92d28f44ef52cff8284d018a9b4d7b7a43a672af4276eca53fa71fd5545379" exitCode=0 Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.658590 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" event={"ID":"135e45db-9947-4e92-9cfb-1cb0a95e13d7","Type":"ContainerDied","Data":"fb92d28f44ef52cff8284d018a9b4d7b7a43a672af4276eca53fa71fd5545379"} Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.660818 4730 generic.go:334] "Generic (PLEG): container finished" podID="5534cdef-89e0-4d1c-b5b6-24a739696063" containerID="d7234c42790032f1de843e54745190e9e4c277ee023d76d4bf3f3bd206f96a5d" exitCode=0 Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.660859 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk8kx" 
event={"ID":"5534cdef-89e0-4d1c-b5b6-24a739696063","Type":"ContainerDied","Data":"d7234c42790032f1de843e54745190e9e4c277ee023d76d4bf3f3bd206f96a5d"} Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.663865 4730 generic.go:334] "Generic (PLEG): container finished" podID="876e37b2-1950-4143-b730-eb121a64a0a8" containerID="6242978f5820c15980468813c75d8ae8d311528053c971534599782a34c47731" exitCode=0 Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.663921 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cd5n" event={"ID":"876e37b2-1950-4143-b730-eb121a64a0a8","Type":"ContainerDied","Data":"6242978f5820c15980468813c75d8ae8d311528053c971534599782a34c47731"} Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.669238 4730 generic.go:334] "Generic (PLEG): container finished" podID="cac6c492-5297-4467-b15b-d211bd932d9e" containerID="6b885215366b21e6eea33e5302a1e2c281b8145c8b57c8da59008d29bfa6321b" exitCode=0 Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.669308 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prb8b" event={"ID":"cac6c492-5297-4467-b15b-d211bd932d9e","Type":"ContainerDied","Data":"6b885215366b21e6eea33e5302a1e2c281b8145c8b57c8da59008d29bfa6321b"} Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.676144 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xz9bh" event={"ID":"e1a4d2f2-e171-4f3a-b890-976343fdafc5","Type":"ContainerDied","Data":"f75c57f3782231d6703e7d5805d667e742d5f899225d66ea2bc8fb6741510542"} Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.675980 4730 generic.go:334] "Generic (PLEG): container finished" podID="e1a4d2f2-e171-4f3a-b890-976343fdafc5" containerID="f75c57f3782231d6703e7d5805d667e742d5f899225d66ea2bc8fb6741510542" exitCode=0 Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.684087 4730 generic.go:334] "Generic (PLEG): container finished" 
podID="cb188db6-3952-4aa4-a29a-d92911e5f1e1" containerID="c19e1860b12357c770dcedf6bcb23a56f932daf94dbf86efb317e67083d2ce2d" exitCode=0 Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.684185 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmqk6" event={"ID":"cb188db6-3952-4aa4-a29a-d92911e5f1e1","Type":"ContainerDied","Data":"c19e1860b12357c770dcedf6bcb23a56f932daf94dbf86efb317e67083d2ce2d"} Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.691773 4730 generic.go:334] "Generic (PLEG): container finished" podID="10e61c85-454b-47fa-8827-5a1de18dcfdf" containerID="f7d2d0d90705c95146d9fc4fb9cc0d4e79bbfa38497ca31dad23d1b116cbe601" exitCode=0 Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.691844 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpd9l" event={"ID":"10e61c85-454b-47fa-8827-5a1de18dcfdf","Type":"ContainerDied","Data":"f7d2d0d90705c95146d9fc4fb9cc0d4e79bbfa38497ca31dad23d1b116cbe601"} Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.694805 4730 generic.go:334] "Generic (PLEG): container finished" podID="17c5db75-0318-476c-aab3-8ddab8adb360" containerID="d3a3b6025c7ae749f4306e3c70d40c7907bf6ae524d483bfae99c483c694dbcd" exitCode=0 Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.694865 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svn5k" event={"ID":"17c5db75-0318-476c-aab3-8ddab8adb360","Type":"ContainerDied","Data":"d3a3b6025c7ae749f4306e3c70d40c7907bf6ae524d483bfae99c483c694dbcd"} Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.699270 4730 generic.go:334] "Generic (PLEG): container finished" podID="48b2fc64-25d1-4463-b474-79a9e3aa90db" containerID="4b315dca739f37bc3e7a6cc92288cf00e231a903318b8a97c7568edd2537de73" exitCode=0 Feb 02 07:30:03 crc kubenswrapper[4730]: I0202 07:30:03.699311 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-v2qjt" event={"ID":"48b2fc64-25d1-4463-b474-79a9e3aa90db","Type":"ContainerDied","Data":"4b315dca739f37bc3e7a6cc92288cf00e231a903318b8a97c7568edd2537de73"} Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.724417 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" podStartSLOduration=13.724401393 podStartE2EDuration="13.724401393s" podCreationTimestamp="2026-02-02 07:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:30:04.722283027 +0000 UTC m=+178.143486395" watchObservedRunningTime="2026-02-02 07:30:04.724401393 +0000 UTC m=+178.145604741" Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.739654 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xrjth" podStartSLOduration=157.739631365 podStartE2EDuration="2m37.739631365s" podCreationTimestamp="2026-02-02 07:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:30:04.734037018 +0000 UTC m=+178.155240376" watchObservedRunningTime="2026-02-02 07:30:04.739631365 +0000 UTC m=+178.160834713" Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.829232 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t"] Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.830861 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.841661 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.842283 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.842681 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.842932 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.843151 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.843708 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.852016 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t"] Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.975479 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgj58\" (UniqueName: \"kubernetes.io/projected/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-kube-api-access-bgj58\") pod \"route-controller-manager-55fc5cf6f8-b7w2t\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.975522 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-config\") pod \"route-controller-manager-55fc5cf6f8-b7w2t\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.975634 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-serving-cert\") pod \"route-controller-manager-55fc5cf6f8-b7w2t\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:04 crc kubenswrapper[4730]: I0202 07:30:04.975793 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-client-ca\") pod \"route-controller-manager-55fc5cf6f8-b7w2t\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.004043 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.076306 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/135e45db-9947-4e92-9cfb-1cb0a95e13d7-secret-volume\") pod \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\" (UID: \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\") " Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.076363 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w79p\" (UniqueName: \"kubernetes.io/projected/135e45db-9947-4e92-9cfb-1cb0a95e13d7-kube-api-access-7w79p\") pod \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\" (UID: \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\") " Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.076383 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/135e45db-9947-4e92-9cfb-1cb0a95e13d7-config-volume\") pod \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\" (UID: \"135e45db-9947-4e92-9cfb-1cb0a95e13d7\") " Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.076464 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-client-ca\") pod \"route-controller-manager-55fc5cf6f8-b7w2t\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.076513 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgj58\" (UniqueName: \"kubernetes.io/projected/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-kube-api-access-bgj58\") pod \"route-controller-manager-55fc5cf6f8-b7w2t\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " 
pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.076538 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-config\") pod \"route-controller-manager-55fc5cf6f8-b7w2t\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.076559 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-serving-cert\") pod \"route-controller-manager-55fc5cf6f8-b7w2t\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.077027 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/135e45db-9947-4e92-9cfb-1cb0a95e13d7-config-volume" (OuterVolumeSpecName: "config-volume") pod "135e45db-9947-4e92-9cfb-1cb0a95e13d7" (UID: "135e45db-9947-4e92-9cfb-1cb0a95e13d7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.078112 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-config\") pod \"route-controller-manager-55fc5cf6f8-b7w2t\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.078268 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-client-ca\") pod \"route-controller-manager-55fc5cf6f8-b7w2t\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.081883 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135e45db-9947-4e92-9cfb-1cb0a95e13d7-kube-api-access-7w79p" (OuterVolumeSpecName: "kube-api-access-7w79p") pod "135e45db-9947-4e92-9cfb-1cb0a95e13d7" (UID: "135e45db-9947-4e92-9cfb-1cb0a95e13d7"). InnerVolumeSpecName "kube-api-access-7w79p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.081935 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135e45db-9947-4e92-9cfb-1cb0a95e13d7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "135e45db-9947-4e92-9cfb-1cb0a95e13d7" (UID: "135e45db-9947-4e92-9cfb-1cb0a95e13d7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.082139 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-serving-cert\") pod \"route-controller-manager-55fc5cf6f8-b7w2t\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.099881 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgj58\" (UniqueName: \"kubernetes.io/projected/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-kube-api-access-bgj58\") pod \"route-controller-manager-55fc5cf6f8-b7w2t\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.177353 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/135e45db-9947-4e92-9cfb-1cb0a95e13d7-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.177391 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w79p\" (UniqueName: \"kubernetes.io/projected/135e45db-9947-4e92-9cfb-1cb0a95e13d7-kube-api-access-7w79p\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.177400 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/135e45db-9947-4e92-9cfb-1cb0a95e13d7-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.212290 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.713018 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpd9l" event={"ID":"10e61c85-454b-47fa-8827-5a1de18dcfdf","Type":"ContainerStarted","Data":"15ee120a70ddfa2e03c90f7e104a948a13e4338fea2d8b1f05bc2aaad1a66305"} Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.714823 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" event={"ID":"135e45db-9947-4e92-9cfb-1cb0a95e13d7","Type":"ContainerDied","Data":"522189bd93b998041d2646ea34b921a7eda031e4b7c39e9c4780f0545b77c272"} Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.714850 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="522189bd93b998041d2646ea34b921a7eda031e4b7c39e9c4780f0545b77c272" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.714897 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-5wlqd" Feb 02 07:30:05 crc kubenswrapper[4730]: I0202 07:30:05.739098 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tpd9l" podStartSLOduration=2.986970653 podStartE2EDuration="31.739077071s" podCreationTimestamp="2026-02-02 07:29:34 +0000 UTC" firstStartedPulling="2026-02-02 07:29:36.207784672 +0000 UTC m=+149.628988020" lastFinishedPulling="2026-02-02 07:30:04.95989109 +0000 UTC m=+178.381094438" observedRunningTime="2026-02-02 07:30:05.734951112 +0000 UTC m=+179.156154450" watchObservedRunningTime="2026-02-02 07:30:05.739077071 +0000 UTC m=+179.160280419" Feb 02 07:30:06 crc kubenswrapper[4730]: I0202 07:30:06.068564 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t"] Feb 02 07:30:06 crc kubenswrapper[4730]: W0202 07:30:06.078005 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dcaabb2_a081_47f9_ad89_a6e4ee57efc9.slice/crio-02f16854f3d5b92c0d592095d3523ea9d212b8c130fcd47db49f5de3be59cd8a WatchSource:0}: Error finding container 02f16854f3d5b92c0d592095d3523ea9d212b8c130fcd47db49f5de3be59cd8a: Status 404 returned error can't find the container with id 02f16854f3d5b92c0d592095d3523ea9d212b8c130fcd47db49f5de3be59cd8a Feb 02 07:30:06 crc kubenswrapper[4730]: I0202 07:30:06.720980 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" event={"ID":"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9","Type":"ContainerStarted","Data":"06beedc014a606dedff648463baec219d5eaf75ed95f1747b47487d2f30a9c8a"} Feb 02 07:30:06 crc kubenswrapper[4730]: I0202 07:30:06.721031 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" event={"ID":"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9","Type":"ContainerStarted","Data":"02f16854f3d5b92c0d592095d3523ea9d212b8c130fcd47db49f5de3be59cd8a"} Feb 02 07:30:06 crc kubenswrapper[4730]: I0202 07:30:06.721258 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:06 crc kubenswrapper[4730]: I0202 07:30:06.723896 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmqk6" event={"ID":"cb188db6-3952-4aa4-a29a-d92911e5f1e1","Type":"ContainerStarted","Data":"d8e3a0e58091a4fb7ace397dcec1b36b192c4dcd9baac6e72ac182e0030b9ecf"} Feb 02 07:30:06 crc kubenswrapper[4730]: I0202 07:30:06.726193 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cd5n" event={"ID":"876e37b2-1950-4143-b730-eb121a64a0a8","Type":"ContainerStarted","Data":"daaf077fed02659f63888557001661141ce10b3a3f0d311633629f984c2eeca2"} Feb 02 07:30:06 crc kubenswrapper[4730]: I0202 07:30:06.728559 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prb8b" event={"ID":"cac6c492-5297-4467-b15b-d211bd932d9e","Type":"ContainerStarted","Data":"efd0529432af4e145180a1be0e9365b0e9bc713b77b4887f6046e61868a8ba51"} Feb 02 07:30:06 crc kubenswrapper[4730]: I0202 07:30:06.731685 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2qjt" event={"ID":"48b2fc64-25d1-4463-b474-79a9e3aa90db","Type":"ContainerStarted","Data":"abdf76924a11e4e9a2c1bae75cd257941bf6cb3a1f0d3de3f11ff1f5bffea74a"} Feb 02 07:30:06 crc kubenswrapper[4730]: I0202 07:30:06.739280 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" podStartSLOduration=15.739256986000001 
podStartE2EDuration="15.739256986s" podCreationTimestamp="2026-02-02 07:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:30:06.738077084 +0000 UTC m=+180.159280442" watchObservedRunningTime="2026-02-02 07:30:06.739256986 +0000 UTC m=+180.160460334" Feb 02 07:30:06 crc kubenswrapper[4730]: I0202 07:30:06.755683 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4cd5n" podStartSLOduration=2.843243578 podStartE2EDuration="29.755663589s" podCreationTimestamp="2026-02-02 07:29:37 +0000 UTC" firstStartedPulling="2026-02-02 07:29:39.332499234 +0000 UTC m=+152.753702582" lastFinishedPulling="2026-02-02 07:30:06.244919245 +0000 UTC m=+179.666122593" observedRunningTime="2026-02-02 07:30:06.753677206 +0000 UTC m=+180.174880544" watchObservedRunningTime="2026-02-02 07:30:06.755663589 +0000 UTC m=+180.176866937" Feb 02 07:30:06 crc kubenswrapper[4730]: I0202 07:30:06.791741 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v2qjt" podStartSLOduration=2.722725497 podStartE2EDuration="31.791720141s" podCreationTimestamp="2026-02-02 07:29:35 +0000 UTC" firstStartedPulling="2026-02-02 07:29:37.276591368 +0000 UTC m=+150.697794716" lastFinishedPulling="2026-02-02 07:30:06.345586012 +0000 UTC m=+179.766789360" observedRunningTime="2026-02-02 07:30:06.790296983 +0000 UTC m=+180.211500341" watchObservedRunningTime="2026-02-02 07:30:06.791720141 +0000 UTC m=+180.212923499" Feb 02 07:30:06 crc kubenswrapper[4730]: I0202 07:30:06.816625 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jmqk6" podStartSLOduration=2.851193889 podStartE2EDuration="29.816608988s" podCreationTimestamp="2026-02-02 07:29:37 +0000 UTC" firstStartedPulling="2026-02-02 07:29:39.348150888 +0000 UTC 
m=+152.769354236" lastFinishedPulling="2026-02-02 07:30:06.313565987 +0000 UTC m=+179.734769335" observedRunningTime="2026-02-02 07:30:06.814138062 +0000 UTC m=+180.235341420" watchObservedRunningTime="2026-02-02 07:30:06.816608988 +0000 UTC m=+180.237812336" Feb 02 07:30:06 crc kubenswrapper[4730]: I0202 07:30:06.836219 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-prb8b" podStartSLOduration=3.49026152 podStartE2EDuration="30.836198705s" podCreationTimestamp="2026-02-02 07:29:36 +0000 UTC" firstStartedPulling="2026-02-02 07:29:38.309478987 +0000 UTC m=+151.730682335" lastFinishedPulling="2026-02-02 07:30:05.655416152 +0000 UTC m=+179.076619520" observedRunningTime="2026-02-02 07:30:06.832136028 +0000 UTC m=+180.253339386" watchObservedRunningTime="2026-02-02 07:30:06.836198705 +0000 UTC m=+180.257402073" Feb 02 07:30:06 crc kubenswrapper[4730]: I0202 07:30:06.990943 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:07 crc kubenswrapper[4730]: I0202 07:30:07.240999 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-prb8b" Feb 02 07:30:07 crc kubenswrapper[4730]: I0202 07:30:07.241145 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-prb8b" Feb 02 07:30:07 crc kubenswrapper[4730]: I0202 07:30:07.651130 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jmqk6" Feb 02 07:30:07 crc kubenswrapper[4730]: I0202 07:30:07.651422 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jmqk6" Feb 02 07:30:07 crc kubenswrapper[4730]: I0202 07:30:07.738533 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-svn5k" event={"ID":"17c5db75-0318-476c-aab3-8ddab8adb360","Type":"ContainerStarted","Data":"892e8254732a2818d8a11fa3ef2cc112ed9bcd30321b31cefd32223b0ebb47d6"} Feb 02 07:30:07 crc kubenswrapper[4730]: I0202 07:30:07.740489 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xz9bh" event={"ID":"e1a4d2f2-e171-4f3a-b890-976343fdafc5","Type":"ContainerStarted","Data":"24915532aa2c14f75f456d3d698ef601376a9aaf03617ddba0dfda656928c689"} Feb 02 07:30:07 crc kubenswrapper[4730]: I0202 07:30:07.743527 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk8kx" event={"ID":"5534cdef-89e0-4d1c-b5b6-24a739696063","Type":"ContainerStarted","Data":"162cd3ad0f4adde36a4e08e2923138984a431ee64706d884de79f5a40dfa658e"} Feb 02 07:30:07 crc kubenswrapper[4730]: I0202 07:30:07.760208 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-svn5k" podStartSLOduration=3.475739756 podStartE2EDuration="30.760190298s" podCreationTimestamp="2026-02-02 07:29:37 +0000 UTC" firstStartedPulling="2026-02-02 07:29:39.341009999 +0000 UTC m=+152.762213337" lastFinishedPulling="2026-02-02 07:30:06.625460531 +0000 UTC m=+180.046663879" observedRunningTime="2026-02-02 07:30:07.756120601 +0000 UTC m=+181.177323949" watchObservedRunningTime="2026-02-02 07:30:07.760190298 +0000 UTC m=+181.181393646" Feb 02 07:30:07 crc kubenswrapper[4730]: I0202 07:30:07.778579 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xz9bh" podStartSLOduration=4.527956394 podStartE2EDuration="33.778566663s" podCreationTimestamp="2026-02-02 07:29:34 +0000 UTC" firstStartedPulling="2026-02-02 07:29:37.277574604 +0000 UTC m=+150.698777952" lastFinishedPulling="2026-02-02 07:30:06.528184873 +0000 UTC m=+179.949388221" observedRunningTime="2026-02-02 07:30:07.776411556 
+0000 UTC m=+181.197614904" watchObservedRunningTime="2026-02-02 07:30:07.778566663 +0000 UTC m=+181.199770011" Feb 02 07:30:08 crc kubenswrapper[4730]: I0202 07:30:08.026900 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-svn5k" Feb 02 07:30:08 crc kubenswrapper[4730]: I0202 07:30:08.026957 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-svn5k" Feb 02 07:30:08 crc kubenswrapper[4730]: I0202 07:30:08.237406 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4cd5n" Feb 02 07:30:08 crc kubenswrapper[4730]: I0202 07:30:08.237456 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4cd5n" Feb 02 07:30:08 crc kubenswrapper[4730]: I0202 07:30:08.551050 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-prb8b" podUID="cac6c492-5297-4467-b15b-d211bd932d9e" containerName="registry-server" probeResult="failure" output=< Feb 02 07:30:08 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Feb 02 07:30:08 crc kubenswrapper[4730]: > Feb 02 07:30:08 crc kubenswrapper[4730]: I0202 07:30:08.697691 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-jmqk6" podUID="cb188db6-3952-4aa4-a29a-d92911e5f1e1" containerName="registry-server" probeResult="failure" output=< Feb 02 07:30:08 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Feb 02 07:30:08 crc kubenswrapper[4730]: > Feb 02 07:30:08 crc kubenswrapper[4730]: I0202 07:30:08.878743 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-555kr" Feb 02 07:30:08 crc kubenswrapper[4730]: I0202 07:30:08.898104 4730 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sk8kx" podStartSLOduration=4.551417463 podStartE2EDuration="34.898087848s" podCreationTimestamp="2026-02-02 07:29:34 +0000 UTC" firstStartedPulling="2026-02-02 07:29:36.204800013 +0000 UTC m=+149.626003361" lastFinishedPulling="2026-02-02 07:30:06.551470398 +0000 UTC m=+179.972673746" observedRunningTime="2026-02-02 07:30:07.798027477 +0000 UTC m=+181.219230825" watchObservedRunningTime="2026-02-02 07:30:08.898087848 +0000 UTC m=+182.319291196" Feb 02 07:30:09 crc kubenswrapper[4730]: I0202 07:30:09.063710 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-svn5k" podUID="17c5db75-0318-476c-aab3-8ddab8adb360" containerName="registry-server" probeResult="failure" output=< Feb 02 07:30:09 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Feb 02 07:30:09 crc kubenswrapper[4730]: > Feb 02 07:30:09 crc kubenswrapper[4730]: I0202 07:30:09.286726 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4cd5n" podUID="876e37b2-1950-4143-b730-eb121a64a0a8" containerName="registry-server" probeResult="failure" output=< Feb 02 07:30:09 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Feb 02 07:30:09 crc kubenswrapper[4730]: > Feb 02 07:30:11 crc kubenswrapper[4730]: I0202 07:30:11.422739 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-599cc55994-c7bz5"] Feb 02 07:30:11 crc kubenswrapper[4730]: I0202 07:30:11.423297 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" podUID="150f7392-de56-4c50-8408-5de0fb5829d0" containerName="controller-manager" containerID="cri-o://048b46e9a4ff6756d9f821d437af52a46ba3d8f26f5ff3bb6fba252a58aaca29" gracePeriod=30 Feb 02 07:30:11 crc 
kubenswrapper[4730]: I0202 07:30:11.424199 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:11 crc kubenswrapper[4730]: I0202 07:30:11.429890 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:11 crc kubenswrapper[4730]: I0202 07:30:11.518972 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t"] Feb 02 07:30:11 crc kubenswrapper[4730]: I0202 07:30:11.519468 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" podUID="8dcaabb2-a081-47f9-ad89-a6e4ee57efc9" containerName="route-controller-manager" containerID="cri-o://06beedc014a606dedff648463baec219d5eaf75ed95f1747b47487d2f30a9c8a" gracePeriod=30 Feb 02 07:30:12 crc kubenswrapper[4730]: I0202 07:30:12.584378 4730 patch_prober.go:28] interesting pod/controller-manager-599cc55994-c7bz5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 02 07:30:12 crc kubenswrapper[4730]: I0202 07:30:12.584450 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" podUID="150f7392-de56-4c50-8408-5de0fb5829d0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 02 07:30:12 crc kubenswrapper[4730]: I0202 07:30:12.767386 4730 generic.go:334] "Generic (PLEG): container finished" podID="8dcaabb2-a081-47f9-ad89-a6e4ee57efc9" 
containerID="06beedc014a606dedff648463baec219d5eaf75ed95f1747b47487d2f30a9c8a" exitCode=0 Feb 02 07:30:12 crc kubenswrapper[4730]: I0202 07:30:12.767426 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" event={"ID":"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9","Type":"ContainerDied","Data":"06beedc014a606dedff648463baec219d5eaf75ed95f1747b47487d2f30a9c8a"} Feb 02 07:30:12 crc kubenswrapper[4730]: I0202 07:30:12.769278 4730 generic.go:334] "Generic (PLEG): container finished" podID="150f7392-de56-4c50-8408-5de0fb5829d0" containerID="048b46e9a4ff6756d9f821d437af52a46ba3d8f26f5ff3bb6fba252a58aaca29" exitCode=0 Feb 02 07:30:12 crc kubenswrapper[4730]: I0202 07:30:12.769307 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" event={"ID":"150f7392-de56-4c50-8408-5de0fb5829d0","Type":"ContainerDied","Data":"048b46e9a4ff6756d9f821d437af52a46ba3d8f26f5ff3bb6fba252a58aaca29"} Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.188352 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.193901 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.211616 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-557f66d8d6-sd9xq"] Feb 02 07:30:13 crc kubenswrapper[4730]: E0202 07:30:13.211855 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135e45db-9947-4e92-9cfb-1cb0a95e13d7" containerName="collect-profiles" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.211868 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="135e45db-9947-4e92-9cfb-1cb0a95e13d7" containerName="collect-profiles" Feb 02 07:30:13 crc kubenswrapper[4730]: E0202 07:30:13.211879 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150f7392-de56-4c50-8408-5de0fb5829d0" containerName="controller-manager" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.211886 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="150f7392-de56-4c50-8408-5de0fb5829d0" containerName="controller-manager" Feb 02 07:30:13 crc kubenswrapper[4730]: E0202 07:30:13.211895 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dcaabb2-a081-47f9-ad89-a6e4ee57efc9" containerName="route-controller-manager" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.211901 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcaabb2-a081-47f9-ad89-a6e4ee57efc9" containerName="route-controller-manager" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.211992 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="150f7392-de56-4c50-8408-5de0fb5829d0" containerName="controller-manager" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.212007 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="135e45db-9947-4e92-9cfb-1cb0a95e13d7" containerName="collect-profiles" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.212015 4730 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8dcaabb2-a081-47f9-ad89-a6e4ee57efc9" containerName="route-controller-manager" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.212371 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.230118 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-557f66d8d6-sd9xq"] Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.277577 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-proxy-ca-bundles\") pod \"150f7392-de56-4c50-8408-5de0fb5829d0\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.277799 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-client-ca\") pod \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.277944 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-config\") pod \"150f7392-de56-4c50-8408-5de0fb5829d0\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.278026 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgj58\" (UniqueName: \"kubernetes.io/projected/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-kube-api-access-bgj58\") pod \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 
07:30:13.278101 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-client-ca\") pod \"150f7392-de56-4c50-8408-5de0fb5829d0\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.278186 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24n25\" (UniqueName: \"kubernetes.io/projected/150f7392-de56-4c50-8408-5de0fb5829d0-kube-api-access-24n25\") pod \"150f7392-de56-4c50-8408-5de0fb5829d0\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.278274 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/150f7392-de56-4c50-8408-5de0fb5829d0-serving-cert\") pod \"150f7392-de56-4c50-8408-5de0fb5829d0\" (UID: \"150f7392-de56-4c50-8408-5de0fb5829d0\") " Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.278358 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-serving-cert\") pod \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.278443 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-config\") pod \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\" (UID: \"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9\") " Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.278640 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-config\") pod 
\"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.278683 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "8dcaabb2-a081-47f9-ad89-a6e4ee57efc9" (UID: "8dcaabb2-a081-47f9-ad89-a6e4ee57efc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.278718 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd96b5b4-50b6-4784-9927-8ad03af50b45-serving-cert\") pod \"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.278755 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-client-ca" (OuterVolumeSpecName: "client-ca") pod "150f7392-de56-4c50-8408-5de0fb5829d0" (UID: "150f7392-de56-4c50-8408-5de0fb5829d0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.278812 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw6jp\" (UniqueName: \"kubernetes.io/projected/fd96b5b4-50b6-4784-9927-8ad03af50b45-kube-api-access-tw6jp\") pod \"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.278853 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-client-ca\") pod \"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.278998 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-proxy-ca-bundles\") pod \"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.279066 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.279082 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.279534 4730 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-config" (OuterVolumeSpecName: "config") pod "8dcaabb2-a081-47f9-ad89-a6e4ee57efc9" (UID: "8dcaabb2-a081-47f9-ad89-a6e4ee57efc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.279755 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-config" (OuterVolumeSpecName: "config") pod "150f7392-de56-4c50-8408-5de0fb5829d0" (UID: "150f7392-de56-4c50-8408-5de0fb5829d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.280307 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "150f7392-de56-4c50-8408-5de0fb5829d0" (UID: "150f7392-de56-4c50-8408-5de0fb5829d0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.283280 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8dcaabb2-a081-47f9-ad89-a6e4ee57efc9" (UID: "8dcaabb2-a081-47f9-ad89-a6e4ee57efc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.283383 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/150f7392-de56-4c50-8408-5de0fb5829d0-kube-api-access-24n25" (OuterVolumeSpecName: "kube-api-access-24n25") pod "150f7392-de56-4c50-8408-5de0fb5829d0" (UID: "150f7392-de56-4c50-8408-5de0fb5829d0"). 
InnerVolumeSpecName "kube-api-access-24n25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.283507 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/150f7392-de56-4c50-8408-5de0fb5829d0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "150f7392-de56-4c50-8408-5de0fb5829d0" (UID: "150f7392-de56-4c50-8408-5de0fb5829d0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.287405 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-kube-api-access-bgj58" (OuterVolumeSpecName: "kube-api-access-bgj58") pod "8dcaabb2-a081-47f9-ad89-a6e4ee57efc9" (UID: "8dcaabb2-a081-47f9-ad89-a6e4ee57efc9"). InnerVolumeSpecName "kube-api-access-bgj58". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.379859 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-proxy-ca-bundles\") pod \"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.379933 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-config\") pod \"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.379953 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fd96b5b4-50b6-4784-9927-8ad03af50b45-serving-cert\") pod \"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.379980 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw6jp\" (UniqueName: \"kubernetes.io/projected/fd96b5b4-50b6-4784-9927-8ad03af50b45-kube-api-access-tw6jp\") pod \"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.380005 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-client-ca\") pod \"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.380067 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.380078 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/150f7392-de56-4c50-8408-5de0fb5829d0-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.380088 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgj58\" (UniqueName: \"kubernetes.io/projected/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-kube-api-access-bgj58\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.380100 4730 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-24n25\" (UniqueName: \"kubernetes.io/projected/150f7392-de56-4c50-8408-5de0fb5829d0-kube-api-access-24n25\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.380108 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/150f7392-de56-4c50-8408-5de0fb5829d0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.380117 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.380129 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.381311 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-client-ca\") pod \"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.381526 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-config\") pod \"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.381829 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-proxy-ca-bundles\") pod \"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.385102 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd96b5b4-50b6-4784-9927-8ad03af50b45-serving-cert\") pod \"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.399224 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw6jp\" (UniqueName: \"kubernetes.io/projected/fd96b5b4-50b6-4784-9927-8ad03af50b45-kube-api-access-tw6jp\") pod \"controller-manager-557f66d8d6-sd9xq\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") " pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.525924 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.703290 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-557f66d8d6-sd9xq"] Feb 02 07:30:13 crc kubenswrapper[4730]: W0202 07:30:13.708690 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd96b5b4_50b6_4784_9927_8ad03af50b45.slice/crio-00f7a325632099035c54427272cbedf635282c79029362c90c9a70589d15f7f4 WatchSource:0}: Error finding container 00f7a325632099035c54427272cbedf635282c79029362c90c9a70589d15f7f4: Status 404 returned error can't find the container with id 00f7a325632099035c54427272cbedf635282c79029362c90c9a70589d15f7f4 Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.787330 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" event={"ID":"8dcaabb2-a081-47f9-ad89-a6e4ee57efc9","Type":"ContainerDied","Data":"02f16854f3d5b92c0d592095d3523ea9d212b8c130fcd47db49f5de3be59cd8a"} Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.787378 4730 scope.go:117] "RemoveContainer" containerID="06beedc014a606dedff648463baec219d5eaf75ed95f1747b47487d2f30a9c8a" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.787496 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.796859 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" event={"ID":"150f7392-de56-4c50-8408-5de0fb5829d0","Type":"ContainerDied","Data":"d49f2110426a8f29a0b77a47dacd67b3c73fb4156a933a36e65b7d71c9f6d245"} Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.796935 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-599cc55994-c7bz5" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.809411 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" event={"ID":"fd96b5b4-50b6-4784-9927-8ad03af50b45","Type":"ContainerStarted","Data":"00f7a325632099035c54427272cbedf635282c79029362c90c9a70589d15f7f4"} Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.817242 4730 scope.go:117] "RemoveContainer" containerID="048b46e9a4ff6756d9f821d437af52a46ba3d8f26f5ff3bb6fba252a58aaca29" Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.828042 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-599cc55994-c7bz5"] Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.835754 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-599cc55994-c7bz5"] Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.839328 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t"] Feb 02 07:30:13 crc kubenswrapper[4730]: I0202 07:30:13.842242 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55fc5cf6f8-b7w2t"] Feb 02 07:30:14 crc 
kubenswrapper[4730]: I0202 07:30:14.818255 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" event={"ID":"fd96b5b4-50b6-4784-9927-8ad03af50b45","Type":"ContainerStarted","Data":"f17e38821be73a57584c9414bcf297c3bb3618499e8076e7bba7aeef734c4e6d"} Feb 02 07:30:14 crc kubenswrapper[4730]: I0202 07:30:14.818613 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:14 crc kubenswrapper[4730]: I0202 07:30:14.823917 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:14 crc kubenswrapper[4730]: I0202 07:30:14.829148 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:30:14 crc kubenswrapper[4730]: I0202 07:30:14.829288 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:30:14 crc kubenswrapper[4730]: I0202 07:30:14.832083 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" podStartSLOduration=3.832064063 podStartE2EDuration="3.832064063s" podCreationTimestamp="2026-02-02 07:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:30:14.831882619 +0000 UTC m=+188.253085977" watchObservedRunningTime="2026-02-02 07:30:14.832064063 +0000 UTC m=+188.253267431" Feb 02 07:30:14 crc kubenswrapper[4730]: I0202 07:30:14.916189 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.031643 4730 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.031704 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.072217 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.259088 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="150f7392-de56-4c50-8408-5de0fb5829d0" path="/var/lib/kubelet/pods/150f7392-de56-4c50-8408-5de0fb5829d0/volumes" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.259609 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dcaabb2-a081-47f9-ad89-a6e4ee57efc9" path="/var/lib/kubelet/pods/8dcaabb2-a081-47f9-ad89-a6e4ee57efc9/volumes" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.318702 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.318747 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.355255 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.477895 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.477955 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.485544 4730 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.535396 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.835682 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg"] Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.836328 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.842638 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.842951 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.843141 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.843385 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.843732 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.844087 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.845685 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg"] Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.861381 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.869738 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.869905 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:30:15 crc kubenswrapper[4730]: I0202 07:30:15.871907 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.012077 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e9d33f9-30b1-42a3-baa7-11e5edb70279-serving-cert\") pod \"route-controller-manager-5548c9bb94-dpqwg\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") " pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.012155 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e9d33f9-30b1-42a3-baa7-11e5edb70279-client-ca\") pod \"route-controller-manager-5548c9bb94-dpqwg\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") " pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.012321 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4e9d33f9-30b1-42a3-baa7-11e5edb70279-config\") pod \"route-controller-manager-5548c9bb94-dpqwg\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") " pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.012559 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgmq\" (UniqueName: \"kubernetes.io/projected/4e9d33f9-30b1-42a3-baa7-11e5edb70279-kube-api-access-rqgmq\") pod \"route-controller-manager-5548c9bb94-dpqwg\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") " pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.114274 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqgmq\" (UniqueName: \"kubernetes.io/projected/4e9d33f9-30b1-42a3-baa7-11e5edb70279-kube-api-access-rqgmq\") pod \"route-controller-manager-5548c9bb94-dpqwg\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") " pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.114412 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e9d33f9-30b1-42a3-baa7-11e5edb70279-serving-cert\") pod \"route-controller-manager-5548c9bb94-dpqwg\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") " pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.114460 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e9d33f9-30b1-42a3-baa7-11e5edb70279-client-ca\") pod \"route-controller-manager-5548c9bb94-dpqwg\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") " 
pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.114540 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e9d33f9-30b1-42a3-baa7-11e5edb70279-config\") pod \"route-controller-manager-5548c9bb94-dpqwg\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") " pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.115675 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e9d33f9-30b1-42a3-baa7-11e5edb70279-client-ca\") pod \"route-controller-manager-5548c9bb94-dpqwg\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") " pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.116132 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e9d33f9-30b1-42a3-baa7-11e5edb70279-config\") pod \"route-controller-manager-5548c9bb94-dpqwg\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") " pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.121406 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e9d33f9-30b1-42a3-baa7-11e5edb70279-serving-cert\") pod \"route-controller-manager-5548c9bb94-dpqwg\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") " pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.130395 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqgmq\" (UniqueName: 
\"kubernetes.io/projected/4e9d33f9-30b1-42a3-baa7-11e5edb70279-kube-api-access-rqgmq\") pod \"route-controller-manager-5548c9bb94-dpqwg\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") " pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.171919 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.546908 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xz9bh"] Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.549879 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg"] Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.828543 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" event={"ID":"4e9d33f9-30b1-42a3-baa7-11e5edb70279","Type":"ContainerStarted","Data":"b3548735f4d96f7c64d4bfb7968799a771f825aab47231f2264fe8775eafe965"} Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.828883 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" event={"ID":"4e9d33f9-30b1-42a3-baa7-11e5edb70279","Type":"ContainerStarted","Data":"3663ed4ad8d65123f52d886c45b485c4cb26f00bffc013a90fbf4b5af077f4c9"} Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.845715 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" podStartSLOduration=5.845700682 podStartE2EDuration="5.845700682s" podCreationTimestamp="2026-02-02 07:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:30:16.844124891 +0000 UTC m=+190.265328249" watchObservedRunningTime="2026-02-02 07:30:16.845700682 +0000 UTC m=+190.266904030" Feb 02 07:30:16 crc kubenswrapper[4730]: I0202 07:30:16.972368 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4hw4w"] Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.293640 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-prb8b" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.339084 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-prb8b" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.396916 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.397602 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.403605 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.403730 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.406613 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.534706 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99391d41-6ea5-4ba2-b8b2-f90c59786740-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"99391d41-6ea5-4ba2-b8b2-f90c59786740\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.534984 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99391d41-6ea5-4ba2-b8b2-f90c59786740-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"99391d41-6ea5-4ba2-b8b2-f90c59786740\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.636057 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99391d41-6ea5-4ba2-b8b2-f90c59786740-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"99391d41-6ea5-4ba2-b8b2-f90c59786740\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.636236 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/99391d41-6ea5-4ba2-b8b2-f90c59786740-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"99391d41-6ea5-4ba2-b8b2-f90c59786740\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.636384 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99391d41-6ea5-4ba2-b8b2-f90c59786740-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"99391d41-6ea5-4ba2-b8b2-f90c59786740\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.658179 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99391d41-6ea5-4ba2-b8b2-f90c59786740-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"99391d41-6ea5-4ba2-b8b2-f90c59786740\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.687255 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jmqk6" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.719672 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.731766 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jmqk6" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.837864 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xz9bh" podUID="e1a4d2f2-e171-4f3a-b890-976343fdafc5" containerName="registry-server" containerID="cri-o://24915532aa2c14f75f456d3d698ef601376a9aaf03617ddba0dfda656928c689" gracePeriod=2 Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.837847 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.846930 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.956564 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v2qjt"] Feb 02 07:30:17 crc kubenswrapper[4730]: I0202 07:30:17.957178 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v2qjt" podUID="48b2fc64-25d1-4463-b474-79a9e3aa90db" containerName="registry-server" containerID="cri-o://abdf76924a11e4e9a2c1bae75cd257941bf6cb3a1f0d3de3f11ff1f5bffea74a" gracePeriod=2 Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.084529 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-svn5k" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.154717 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-svn5k" Feb 
02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.166197 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 07:30:18 crc kubenswrapper[4730]: W0202 07:30:18.191940 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod99391d41_6ea5_4ba2_b8b2_f90c59786740.slice/crio-cb45edb6062be2ae157dd68ab54ae2b94c2afad70d985d5774880ed5fe0b5c26 WatchSource:0}: Error finding container cb45edb6062be2ae157dd68ab54ae2b94c2afad70d985d5774880ed5fe0b5c26: Status 404 returned error can't find the container with id cb45edb6062be2ae157dd68ab54ae2b94c2afad70d985d5774880ed5fe0b5c26 Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.298669 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.300473 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4cd5n" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.362860 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4cd5n" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.456510 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a4d2f2-e171-4f3a-b890-976343fdafc5-catalog-content\") pod \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\" (UID: \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\") " Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.456856 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a4d2f2-e171-4f3a-b890-976343fdafc5-utilities\") pod \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\" (UID: \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\") " Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 
07:30:18.456902 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm88h\" (UniqueName: \"kubernetes.io/projected/e1a4d2f2-e171-4f3a-b890-976343fdafc5-kube-api-access-nm88h\") pod \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\" (UID: \"e1a4d2f2-e171-4f3a-b890-976343fdafc5\") " Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.457977 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a4d2f2-e171-4f3a-b890-976343fdafc5-utilities" (OuterVolumeSpecName: "utilities") pod "e1a4d2f2-e171-4f3a-b890-976343fdafc5" (UID: "e1a4d2f2-e171-4f3a-b890-976343fdafc5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.463376 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a4d2f2-e171-4f3a-b890-976343fdafc5-kube-api-access-nm88h" (OuterVolumeSpecName: "kube-api-access-nm88h") pod "e1a4d2f2-e171-4f3a-b890-976343fdafc5" (UID: "e1a4d2f2-e171-4f3a-b890-976343fdafc5"). InnerVolumeSpecName "kube-api-access-nm88h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.481304 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.498372 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a4d2f2-e171-4f3a-b890-976343fdafc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1a4d2f2-e171-4f3a-b890-976343fdafc5" (UID: "e1a4d2f2-e171-4f3a-b890-976343fdafc5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.564209 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a4d2f2-e171-4f3a-b890-976343fdafc5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.564230 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a4d2f2-e171-4f3a-b890-976343fdafc5-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.564240 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm88h\" (UniqueName: \"kubernetes.io/projected/e1a4d2f2-e171-4f3a-b890-976343fdafc5-kube-api-access-nm88h\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.665192 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf6gj\" (UniqueName: \"kubernetes.io/projected/48b2fc64-25d1-4463-b474-79a9e3aa90db-kube-api-access-wf6gj\") pod \"48b2fc64-25d1-4463-b474-79a9e3aa90db\" (UID: \"48b2fc64-25d1-4463-b474-79a9e3aa90db\") " Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.665289 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b2fc64-25d1-4463-b474-79a9e3aa90db-utilities\") pod \"48b2fc64-25d1-4463-b474-79a9e3aa90db\" (UID: \"48b2fc64-25d1-4463-b474-79a9e3aa90db\") " Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.665323 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b2fc64-25d1-4463-b474-79a9e3aa90db-catalog-content\") pod \"48b2fc64-25d1-4463-b474-79a9e3aa90db\" (UID: \"48b2fc64-25d1-4463-b474-79a9e3aa90db\") " Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.666251 
4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48b2fc64-25d1-4463-b474-79a9e3aa90db-utilities" (OuterVolumeSpecName: "utilities") pod "48b2fc64-25d1-4463-b474-79a9e3aa90db" (UID: "48b2fc64-25d1-4463-b474-79a9e3aa90db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.668124 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b2fc64-25d1-4463-b474-79a9e3aa90db-kube-api-access-wf6gj" (OuterVolumeSpecName: "kube-api-access-wf6gj") pod "48b2fc64-25d1-4463-b474-79a9e3aa90db" (UID: "48b2fc64-25d1-4463-b474-79a9e3aa90db"). InnerVolumeSpecName "kube-api-access-wf6gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.719746 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48b2fc64-25d1-4463-b474-79a9e3aa90db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48b2fc64-25d1-4463-b474-79a9e3aa90db" (UID: "48b2fc64-25d1-4463-b474-79a9e3aa90db"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.766323 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b2fc64-25d1-4463-b474-79a9e3aa90db-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.766357 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b2fc64-25d1-4463-b474-79a9e3aa90db-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.766369 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf6gj\" (UniqueName: \"kubernetes.io/projected/48b2fc64-25d1-4463-b474-79a9e3aa90db-kube-api-access-wf6gj\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.842799 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"99391d41-6ea5-4ba2-b8b2-f90c59786740","Type":"ContainerStarted","Data":"e4e6e00fbbcb781922ba1d0d12fb7fd77fbb4d34be84cf518ac11048b8e04aeb"} Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.843085 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"99391d41-6ea5-4ba2-b8b2-f90c59786740","Type":"ContainerStarted","Data":"cb45edb6062be2ae157dd68ab54ae2b94c2afad70d985d5774880ed5fe0b5c26"} Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.845077 4730 generic.go:334] "Generic (PLEG): container finished" podID="48b2fc64-25d1-4463-b474-79a9e3aa90db" containerID="abdf76924a11e4e9a2c1bae75cd257941bf6cb3a1f0d3de3f11ff1f5bffea74a" exitCode=0 Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.845273 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2qjt" 
event={"ID":"48b2fc64-25d1-4463-b474-79a9e3aa90db","Type":"ContainerDied","Data":"abdf76924a11e4e9a2c1bae75cd257941bf6cb3a1f0d3de3f11ff1f5bffea74a"} Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.845391 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2qjt" event={"ID":"48b2fc64-25d1-4463-b474-79a9e3aa90db","Type":"ContainerDied","Data":"c56decfe7f5a44fb09b2b76554498d65c9a50bfbf4c93fd82915ec79452f1370"} Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.845660 4730 scope.go:117] "RemoveContainer" containerID="abdf76924a11e4e9a2c1bae75cd257941bf6cb3a1f0d3de3f11ff1f5bffea74a" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.845833 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v2qjt" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.859104 4730 generic.go:334] "Generic (PLEG): container finished" podID="e1a4d2f2-e171-4f3a-b890-976343fdafc5" containerID="24915532aa2c14f75f456d3d698ef601376a9aaf03617ddba0dfda656928c689" exitCode=0 Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.859505 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xz9bh" event={"ID":"e1a4d2f2-e171-4f3a-b890-976343fdafc5","Type":"ContainerDied","Data":"24915532aa2c14f75f456d3d698ef601376a9aaf03617ddba0dfda656928c689"} Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.859548 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xz9bh" event={"ID":"e1a4d2f2-e171-4f3a-b890-976343fdafc5","Type":"ContainerDied","Data":"6ea8801eaa4081325fcac33a0332f8cf9aee9b70e58291eb028312cc0d4fef11"} Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.859609 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xz9bh" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.876263 4730 scope.go:117] "RemoveContainer" containerID="4b315dca739f37bc3e7a6cc92288cf00e231a903318b8a97c7568edd2537de73" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.896341 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.884661411 podStartE2EDuration="1.884661411s" podCreationTimestamp="2026-02-02 07:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:30:18.865854184 +0000 UTC m=+192.287057532" watchObservedRunningTime="2026-02-02 07:30:18.884661411 +0000 UTC m=+192.305864789" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.900508 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v2qjt"] Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.904046 4730 scope.go:117] "RemoveContainer" containerID="50ded1a9e8599d9cc768f051c9b39152c346a4b21538ac6811bfae6878477921" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.909962 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v2qjt"] Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.918921 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xz9bh"] Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.922174 4730 scope.go:117] "RemoveContainer" containerID="abdf76924a11e4e9a2c1bae75cd257941bf6cb3a1f0d3de3f11ff1f5bffea74a" Feb 02 07:30:18 crc kubenswrapper[4730]: E0202 07:30:18.922930 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abdf76924a11e4e9a2c1bae75cd257941bf6cb3a1f0d3de3f11ff1f5bffea74a\": container with ID starting 
with abdf76924a11e4e9a2c1bae75cd257941bf6cb3a1f0d3de3f11ff1f5bffea74a not found: ID does not exist" containerID="abdf76924a11e4e9a2c1bae75cd257941bf6cb3a1f0d3de3f11ff1f5bffea74a" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.922975 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abdf76924a11e4e9a2c1bae75cd257941bf6cb3a1f0d3de3f11ff1f5bffea74a"} err="failed to get container status \"abdf76924a11e4e9a2c1bae75cd257941bf6cb3a1f0d3de3f11ff1f5bffea74a\": rpc error: code = NotFound desc = could not find container \"abdf76924a11e4e9a2c1bae75cd257941bf6cb3a1f0d3de3f11ff1f5bffea74a\": container with ID starting with abdf76924a11e4e9a2c1bae75cd257941bf6cb3a1f0d3de3f11ff1f5bffea74a not found: ID does not exist" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.923020 4730 scope.go:117] "RemoveContainer" containerID="4b315dca739f37bc3e7a6cc92288cf00e231a903318b8a97c7568edd2537de73" Feb 02 07:30:18 crc kubenswrapper[4730]: E0202 07:30:18.923428 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b315dca739f37bc3e7a6cc92288cf00e231a903318b8a97c7568edd2537de73\": container with ID starting with 4b315dca739f37bc3e7a6cc92288cf00e231a903318b8a97c7568edd2537de73 not found: ID does not exist" containerID="4b315dca739f37bc3e7a6cc92288cf00e231a903318b8a97c7568edd2537de73" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.923473 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b315dca739f37bc3e7a6cc92288cf00e231a903318b8a97c7568edd2537de73"} err="failed to get container status \"4b315dca739f37bc3e7a6cc92288cf00e231a903318b8a97c7568edd2537de73\": rpc error: code = NotFound desc = could not find container \"4b315dca739f37bc3e7a6cc92288cf00e231a903318b8a97c7568edd2537de73\": container with ID starting with 4b315dca739f37bc3e7a6cc92288cf00e231a903318b8a97c7568edd2537de73 not found: ID does 
not exist" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.923502 4730 scope.go:117] "RemoveContainer" containerID="50ded1a9e8599d9cc768f051c9b39152c346a4b21538ac6811bfae6878477921" Feb 02 07:30:18 crc kubenswrapper[4730]: E0202 07:30:18.924495 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50ded1a9e8599d9cc768f051c9b39152c346a4b21538ac6811bfae6878477921\": container with ID starting with 50ded1a9e8599d9cc768f051c9b39152c346a4b21538ac6811bfae6878477921 not found: ID does not exist" containerID="50ded1a9e8599d9cc768f051c9b39152c346a4b21538ac6811bfae6878477921" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.924523 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50ded1a9e8599d9cc768f051c9b39152c346a4b21538ac6811bfae6878477921"} err="failed to get container status \"50ded1a9e8599d9cc768f051c9b39152c346a4b21538ac6811bfae6878477921\": rpc error: code = NotFound desc = could not find container \"50ded1a9e8599d9cc768f051c9b39152c346a4b21538ac6811bfae6878477921\": container with ID starting with 50ded1a9e8599d9cc768f051c9b39152c346a4b21538ac6811bfae6878477921 not found: ID does not exist" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.924538 4730 scope.go:117] "RemoveContainer" containerID="24915532aa2c14f75f456d3d698ef601376a9aaf03617ddba0dfda656928c689" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.926378 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xz9bh"] Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.945056 4730 scope.go:117] "RemoveContainer" containerID="f75c57f3782231d6703e7d5805d667e742d5f899225d66ea2bc8fb6741510542" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.968113 4730 scope.go:117] "RemoveContainer" containerID="7932b272a011d0946baca56eddbfddedf700e1109e9764c2399807f13b38b438" Feb 02 07:30:18 crc kubenswrapper[4730]: 
I0202 07:30:18.986906 4730 scope.go:117] "RemoveContainer" containerID="24915532aa2c14f75f456d3d698ef601376a9aaf03617ddba0dfda656928c689" Feb 02 07:30:18 crc kubenswrapper[4730]: E0202 07:30:18.987530 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24915532aa2c14f75f456d3d698ef601376a9aaf03617ddba0dfda656928c689\": container with ID starting with 24915532aa2c14f75f456d3d698ef601376a9aaf03617ddba0dfda656928c689 not found: ID does not exist" containerID="24915532aa2c14f75f456d3d698ef601376a9aaf03617ddba0dfda656928c689" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.987571 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24915532aa2c14f75f456d3d698ef601376a9aaf03617ddba0dfda656928c689"} err="failed to get container status \"24915532aa2c14f75f456d3d698ef601376a9aaf03617ddba0dfda656928c689\": rpc error: code = NotFound desc = could not find container \"24915532aa2c14f75f456d3d698ef601376a9aaf03617ddba0dfda656928c689\": container with ID starting with 24915532aa2c14f75f456d3d698ef601376a9aaf03617ddba0dfda656928c689 not found: ID does not exist" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.987598 4730 scope.go:117] "RemoveContainer" containerID="f75c57f3782231d6703e7d5805d667e742d5f899225d66ea2bc8fb6741510542" Feb 02 07:30:18 crc kubenswrapper[4730]: E0202 07:30:18.988004 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f75c57f3782231d6703e7d5805d667e742d5f899225d66ea2bc8fb6741510542\": container with ID starting with f75c57f3782231d6703e7d5805d667e742d5f899225d66ea2bc8fb6741510542 not found: ID does not exist" containerID="f75c57f3782231d6703e7d5805d667e742d5f899225d66ea2bc8fb6741510542" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.988045 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f75c57f3782231d6703e7d5805d667e742d5f899225d66ea2bc8fb6741510542"} err="failed to get container status \"f75c57f3782231d6703e7d5805d667e742d5f899225d66ea2bc8fb6741510542\": rpc error: code = NotFound desc = could not find container \"f75c57f3782231d6703e7d5805d667e742d5f899225d66ea2bc8fb6741510542\": container with ID starting with f75c57f3782231d6703e7d5805d667e742d5f899225d66ea2bc8fb6741510542 not found: ID does not exist" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.988072 4730 scope.go:117] "RemoveContainer" containerID="7932b272a011d0946baca56eddbfddedf700e1109e9764c2399807f13b38b438" Feb 02 07:30:18 crc kubenswrapper[4730]: E0202 07:30:18.988541 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7932b272a011d0946baca56eddbfddedf700e1109e9764c2399807f13b38b438\": container with ID starting with 7932b272a011d0946baca56eddbfddedf700e1109e9764c2399807f13b38b438 not found: ID does not exist" containerID="7932b272a011d0946baca56eddbfddedf700e1109e9764c2399807f13b38b438" Feb 02 07:30:18 crc kubenswrapper[4730]: I0202 07:30:18.988634 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7932b272a011d0946baca56eddbfddedf700e1109e9764c2399807f13b38b438"} err="failed to get container status \"7932b272a011d0946baca56eddbfddedf700e1109e9764c2399807f13b38b438\": rpc error: code = NotFound desc = could not find container \"7932b272a011d0946baca56eddbfddedf700e1109e9764c2399807f13b38b438\": container with ID starting with 7932b272a011d0946baca56eddbfddedf700e1109e9764c2399807f13b38b438 not found: ID does not exist" Feb 02 07:30:19 crc kubenswrapper[4730]: I0202 07:30:19.261506 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b2fc64-25d1-4463-b474-79a9e3aa90db" path="/var/lib/kubelet/pods/48b2fc64-25d1-4463-b474-79a9e3aa90db/volumes" Feb 02 07:30:19 crc kubenswrapper[4730]: I0202 
07:30:19.262512 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a4d2f2-e171-4f3a-b890-976343fdafc5" path="/var/lib/kubelet/pods/e1a4d2f2-e171-4f3a-b890-976343fdafc5/volumes" Feb 02 07:30:19 crc kubenswrapper[4730]: I0202 07:30:19.879434 4730 generic.go:334] "Generic (PLEG): container finished" podID="99391d41-6ea5-4ba2-b8b2-f90c59786740" containerID="e4e6e00fbbcb781922ba1d0d12fb7fd77fbb4d34be84cf518ac11048b8e04aeb" exitCode=0 Feb 02 07:30:19 crc kubenswrapper[4730]: I0202 07:30:19.879511 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"99391d41-6ea5-4ba2-b8b2-f90c59786740","Type":"ContainerDied","Data":"e4e6e00fbbcb781922ba1d0d12fb7fd77fbb4d34be84cf518ac11048b8e04aeb"} Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.347475 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmqk6"] Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.347990 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jmqk6" podUID="cb188db6-3952-4aa4-a29a-d92911e5f1e1" containerName="registry-server" containerID="cri-o://d8e3a0e58091a4fb7ace397dcec1b36b192c4dcd9baac6e72ac182e0030b9ecf" gracePeriod=2 Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.837179 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmqk6" Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.888749 4730 generic.go:334] "Generic (PLEG): container finished" podID="cb188db6-3952-4aa4-a29a-d92911e5f1e1" containerID="d8e3a0e58091a4fb7ace397dcec1b36b192c4dcd9baac6e72ac182e0030b9ecf" exitCode=0 Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.888814 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmqk6" event={"ID":"cb188db6-3952-4aa4-a29a-d92911e5f1e1","Type":"ContainerDied","Data":"d8e3a0e58091a4fb7ace397dcec1b36b192c4dcd9baac6e72ac182e0030b9ecf"} Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.888873 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmqk6" event={"ID":"cb188db6-3952-4aa4-a29a-d92911e5f1e1","Type":"ContainerDied","Data":"ca792a526ddafd01b425297df97a85623113cf0ff87710cc55f1d842ba81a64c"} Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.888874 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmqk6" Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.888894 4730 scope.go:117] "RemoveContainer" containerID="d8e3a0e58091a4fb7ace397dcec1b36b192c4dcd9baac6e72ac182e0030b9ecf" Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.903535 4730 scope.go:117] "RemoveContainer" containerID="c19e1860b12357c770dcedf6bcb23a56f932daf94dbf86efb317e67083d2ce2d" Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.931396 4730 scope.go:117] "RemoveContainer" containerID="38ec852e4078471bb0001d1cb22e6546024eda3f3f7dcc0dc315db769794399f" Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.992986 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb188db6-3952-4aa4-a29a-d92911e5f1e1-catalog-content\") pod \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\" (UID: \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\") " Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.993056 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xcxh\" (UniqueName: \"kubernetes.io/projected/cb188db6-3952-4aa4-a29a-d92911e5f1e1-kube-api-access-7xcxh\") pod \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\" (UID: \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\") " Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.993123 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb188db6-3952-4aa4-a29a-d92911e5f1e1-utilities\") pod \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\" (UID: \"cb188db6-3952-4aa4-a29a-d92911e5f1e1\") " Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.993731 4730 scope.go:117] "RemoveContainer" containerID="d8e3a0e58091a4fb7ace397dcec1b36b192c4dcd9baac6e72ac182e0030b9ecf" Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.994074 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/cb188db6-3952-4aa4-a29a-d92911e5f1e1-utilities" (OuterVolumeSpecName: "utilities") pod "cb188db6-3952-4aa4-a29a-d92911e5f1e1" (UID: "cb188db6-3952-4aa4-a29a-d92911e5f1e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:30:20 crc kubenswrapper[4730]: E0202 07:30:20.994728 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8e3a0e58091a4fb7ace397dcec1b36b192c4dcd9baac6e72ac182e0030b9ecf\": container with ID starting with d8e3a0e58091a4fb7ace397dcec1b36b192c4dcd9baac6e72ac182e0030b9ecf not found: ID does not exist" containerID="d8e3a0e58091a4fb7ace397dcec1b36b192c4dcd9baac6e72ac182e0030b9ecf"
Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.994773 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e3a0e58091a4fb7ace397dcec1b36b192c4dcd9baac6e72ac182e0030b9ecf"} err="failed to get container status \"d8e3a0e58091a4fb7ace397dcec1b36b192c4dcd9baac6e72ac182e0030b9ecf\": rpc error: code = NotFound desc = could not find container \"d8e3a0e58091a4fb7ace397dcec1b36b192c4dcd9baac6e72ac182e0030b9ecf\": container with ID starting with d8e3a0e58091a4fb7ace397dcec1b36b192c4dcd9baac6e72ac182e0030b9ecf not found: ID does not exist"
Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.994803 4730 scope.go:117] "RemoveContainer" containerID="c19e1860b12357c770dcedf6bcb23a56f932daf94dbf86efb317e67083d2ce2d"
Feb 02 07:30:20 crc kubenswrapper[4730]: E0202 07:30:20.995231 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c19e1860b12357c770dcedf6bcb23a56f932daf94dbf86efb317e67083d2ce2d\": container with ID starting with c19e1860b12357c770dcedf6bcb23a56f932daf94dbf86efb317e67083d2ce2d not found: ID does not exist" containerID="c19e1860b12357c770dcedf6bcb23a56f932daf94dbf86efb317e67083d2ce2d"
Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.995280 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19e1860b12357c770dcedf6bcb23a56f932daf94dbf86efb317e67083d2ce2d"} err="failed to get container status \"c19e1860b12357c770dcedf6bcb23a56f932daf94dbf86efb317e67083d2ce2d\": rpc error: code = NotFound desc = could not find container \"c19e1860b12357c770dcedf6bcb23a56f932daf94dbf86efb317e67083d2ce2d\": container with ID starting with c19e1860b12357c770dcedf6bcb23a56f932daf94dbf86efb317e67083d2ce2d not found: ID does not exist"
Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.995310 4730 scope.go:117] "RemoveContainer" containerID="38ec852e4078471bb0001d1cb22e6546024eda3f3f7dcc0dc315db769794399f"
Feb 02 07:30:20 crc kubenswrapper[4730]: E0202 07:30:20.995634 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38ec852e4078471bb0001d1cb22e6546024eda3f3f7dcc0dc315db769794399f\": container with ID starting with 38ec852e4078471bb0001d1cb22e6546024eda3f3f7dcc0dc315db769794399f not found: ID does not exist" containerID="38ec852e4078471bb0001d1cb22e6546024eda3f3f7dcc0dc315db769794399f"
Feb 02 07:30:20 crc kubenswrapper[4730]: I0202 07:30:20.995666 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38ec852e4078471bb0001d1cb22e6546024eda3f3f7dcc0dc315db769794399f"} err="failed to get container status \"38ec852e4078471bb0001d1cb22e6546024eda3f3f7dcc0dc315db769794399f\": rpc error: code = NotFound desc = could not find container \"38ec852e4078471bb0001d1cb22e6546024eda3f3f7dcc0dc315db769794399f\": container with ID starting with 38ec852e4078471bb0001d1cb22e6546024eda3f3f7dcc0dc315db769794399f not found: ID does not exist"
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.001286 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb188db6-3952-4aa4-a29a-d92911e5f1e1-kube-api-access-7xcxh" (OuterVolumeSpecName: "kube-api-access-7xcxh") pod "cb188db6-3952-4aa4-a29a-d92911e5f1e1" (UID: "cb188db6-3952-4aa4-a29a-d92911e5f1e1"). InnerVolumeSpecName "kube-api-access-7xcxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.016301 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb188db6-3952-4aa4-a29a-d92911e5f1e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb188db6-3952-4aa4-a29a-d92911e5f1e1" (UID: "cb188db6-3952-4aa4-a29a-d92911e5f1e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.094495 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb188db6-3952-4aa4-a29a-d92911e5f1e1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.094540 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xcxh\" (UniqueName: \"kubernetes.io/projected/cb188db6-3952-4aa4-a29a-d92911e5f1e1-kube-api-access-7xcxh\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.094557 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb188db6-3952-4aa4-a29a-d92911e5f1e1-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.187996 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.226215 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmqk6"]
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.228176 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmqk6"]
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.259439 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb188db6-3952-4aa4-a29a-d92911e5f1e1" path="/var/lib/kubelet/pods/cb188db6-3952-4aa4-a29a-d92911e5f1e1/volumes"
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.296440 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99391d41-6ea5-4ba2-b8b2-f90c59786740-kubelet-dir\") pod \"99391d41-6ea5-4ba2-b8b2-f90c59786740\" (UID: \"99391d41-6ea5-4ba2-b8b2-f90c59786740\") "
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.296542 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99391d41-6ea5-4ba2-b8b2-f90c59786740-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "99391d41-6ea5-4ba2-b8b2-f90c59786740" (UID: "99391d41-6ea5-4ba2-b8b2-f90c59786740"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.296579 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99391d41-6ea5-4ba2-b8b2-f90c59786740-kube-api-access\") pod \"99391d41-6ea5-4ba2-b8b2-f90c59786740\" (UID: \"99391d41-6ea5-4ba2-b8b2-f90c59786740\") "
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.296788 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99391d41-6ea5-4ba2-b8b2-f90c59786740-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.299405 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99391d41-6ea5-4ba2-b8b2-f90c59786740-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "99391d41-6ea5-4ba2-b8b2-f90c59786740" (UID: "99391d41-6ea5-4ba2-b8b2-f90c59786740"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.397496 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99391d41-6ea5-4ba2-b8b2-f90c59786740-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.898056 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"99391d41-6ea5-4ba2-b8b2-f90c59786740","Type":"ContainerDied","Data":"cb45edb6062be2ae157dd68ab54ae2b94c2afad70d985d5774880ed5fe0b5c26"}
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.898099 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb45edb6062be2ae157dd68ab54ae2b94c2afad70d985d5774880ed5fe0b5c26"
Feb 02 07:30:21 crc kubenswrapper[4730]: I0202 07:30:21.898128 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.745950 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4cd5n"]
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.746277 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4cd5n" podUID="876e37b2-1950-4143-b730-eb121a64a0a8" containerName="registry-server" containerID="cri-o://daaf077fed02659f63888557001661141ce10b3a3f0d311633629f984c2eeca2" gracePeriod=2
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.912716 4730 generic.go:334] "Generic (PLEG): container finished" podID="876e37b2-1950-4143-b730-eb121a64a0a8" containerID="daaf077fed02659f63888557001661141ce10b3a3f0d311633629f984c2eeca2" exitCode=0
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.912778 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cd5n" event={"ID":"876e37b2-1950-4143-b730-eb121a64a0a8","Type":"ContainerDied","Data":"daaf077fed02659f63888557001661141ce10b3a3f0d311633629f984c2eeca2"}
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.991909 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 02 07:30:22 crc kubenswrapper[4730]: E0202 07:30:22.992257 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a4d2f2-e171-4f3a-b890-976343fdafc5" containerName="extract-utilities"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.992280 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a4d2f2-e171-4f3a-b890-976343fdafc5" containerName="extract-utilities"
Feb 02 07:30:22 crc kubenswrapper[4730]: E0202 07:30:22.992297 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb188db6-3952-4aa4-a29a-d92911e5f1e1" containerName="extract-content"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.992309 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb188db6-3952-4aa4-a29a-d92911e5f1e1" containerName="extract-content"
Feb 02 07:30:22 crc kubenswrapper[4730]: E0202 07:30:22.992328 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b2fc64-25d1-4463-b474-79a9e3aa90db" containerName="extract-content"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.992340 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b2fc64-25d1-4463-b474-79a9e3aa90db" containerName="extract-content"
Feb 02 07:30:22 crc kubenswrapper[4730]: E0202 07:30:22.992356 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a4d2f2-e171-4f3a-b890-976343fdafc5" containerName="extract-content"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.992368 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a4d2f2-e171-4f3a-b890-976343fdafc5" containerName="extract-content"
Feb 02 07:30:22 crc kubenswrapper[4730]: E0202 07:30:22.992390 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b2fc64-25d1-4463-b474-79a9e3aa90db" containerName="extract-utilities"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.992402 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b2fc64-25d1-4463-b474-79a9e3aa90db" containerName="extract-utilities"
Feb 02 07:30:22 crc kubenswrapper[4730]: E0202 07:30:22.992422 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99391d41-6ea5-4ba2-b8b2-f90c59786740" containerName="pruner"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.992434 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="99391d41-6ea5-4ba2-b8b2-f90c59786740" containerName="pruner"
Feb 02 07:30:22 crc kubenswrapper[4730]: E0202 07:30:22.992454 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a4d2f2-e171-4f3a-b890-976343fdafc5" containerName="registry-server"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.992467 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a4d2f2-e171-4f3a-b890-976343fdafc5" containerName="registry-server"
Feb 02 07:30:22 crc kubenswrapper[4730]: E0202 07:30:22.992483 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b2fc64-25d1-4463-b474-79a9e3aa90db" containerName="registry-server"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.992495 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b2fc64-25d1-4463-b474-79a9e3aa90db" containerName="registry-server"
Feb 02 07:30:22 crc kubenswrapper[4730]: E0202 07:30:22.992514 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb188db6-3952-4aa4-a29a-d92911e5f1e1" containerName="registry-server"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.992527 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb188db6-3952-4aa4-a29a-d92911e5f1e1" containerName="registry-server"
Feb 02 07:30:22 crc kubenswrapper[4730]: E0202 07:30:22.992548 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb188db6-3952-4aa4-a29a-d92911e5f1e1" containerName="extract-utilities"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.992560 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb188db6-3952-4aa4-a29a-d92911e5f1e1" containerName="extract-utilities"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.992727 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="99391d41-6ea5-4ba2-b8b2-f90c59786740" containerName="pruner"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.992752 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb188db6-3952-4aa4-a29a-d92911e5f1e1" containerName="registry-server"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.992773 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a4d2f2-e171-4f3a-b890-976343fdafc5" containerName="registry-server"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.992788 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b2fc64-25d1-4463-b474-79a9e3aa90db" containerName="registry-server"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.993373 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.998045 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 02 07:30:22 crc kubenswrapper[4730]: I0202 07:30:22.998685 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.002058 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.118746 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.118843 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-kube-api-access\") pod \"installer-9-crc\" (UID: \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.118973 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-var-lock\") pod \"installer-9-crc\" (UID: \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.220467 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-var-lock\") pod \"installer-9-crc\" (UID: \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.220557 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.220599 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-var-lock\") pod \"installer-9-crc\" (UID: \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.220704 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.220748 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-kube-api-access\") pod \"installer-9-crc\" (UID: \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.246786 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-kube-api-access\") pod \"installer-9-crc\" (UID: \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.285947 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4cd5n"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.323877 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.423742 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmjn8\" (UniqueName: \"kubernetes.io/projected/876e37b2-1950-4143-b730-eb121a64a0a8-kube-api-access-gmjn8\") pod \"876e37b2-1950-4143-b730-eb121a64a0a8\" (UID: \"876e37b2-1950-4143-b730-eb121a64a0a8\") "
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.424143 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/876e37b2-1950-4143-b730-eb121a64a0a8-catalog-content\") pod \"876e37b2-1950-4143-b730-eb121a64a0a8\" (UID: \"876e37b2-1950-4143-b730-eb121a64a0a8\") "
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.424193 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/876e37b2-1950-4143-b730-eb121a64a0a8-utilities\") pod \"876e37b2-1950-4143-b730-eb121a64a0a8\" (UID: \"876e37b2-1950-4143-b730-eb121a64a0a8\") "
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.425184 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/876e37b2-1950-4143-b730-eb121a64a0a8-utilities" (OuterVolumeSpecName: "utilities") pod "876e37b2-1950-4143-b730-eb121a64a0a8" (UID: "876e37b2-1950-4143-b730-eb121a64a0a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.432354 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/876e37b2-1950-4143-b730-eb121a64a0a8-kube-api-access-gmjn8" (OuterVolumeSpecName: "kube-api-access-gmjn8") pod "876e37b2-1950-4143-b730-eb121a64a0a8" (UID: "876e37b2-1950-4143-b730-eb121a64a0a8"). InnerVolumeSpecName "kube-api-access-gmjn8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.525118 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmjn8\" (UniqueName: \"kubernetes.io/projected/876e37b2-1950-4143-b730-eb121a64a0a8-kube-api-access-gmjn8\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.525142 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/876e37b2-1950-4143-b730-eb121a64a0a8-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.547676 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/876e37b2-1950-4143-b730-eb121a64a0a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "876e37b2-1950-4143-b730-eb121a64a0a8" (UID: "876e37b2-1950-4143-b730-eb121a64a0a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.625841 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/876e37b2-1950-4143-b730-eb121a64a0a8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.733923 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.921212 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cd5n" event={"ID":"876e37b2-1950-4143-b730-eb121a64a0a8","Type":"ContainerDied","Data":"d4b113c0cf6a0a0caa4fe40a023e70a871cc7d28caacbe7ad5755603e5fb6da7"}
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.921267 4730 scope.go:117] "RemoveContainer" containerID="daaf077fed02659f63888557001661141ce10b3a3f0d311633629f984c2eeca2"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.921378 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4cd5n"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.932922 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b","Type":"ContainerStarted","Data":"a4525c71f90201cce2c382c05dbdd8c38abf8f55e9db7ea64cb7e891a31ca770"}
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.972919 4730 scope.go:117] "RemoveContainer" containerID="6242978f5820c15980468813c75d8ae8d311528053c971534599782a34c47731"
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.973920 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4cd5n"]
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.978325 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4cd5n"]
Feb 02 07:30:23 crc kubenswrapper[4730]: I0202 07:30:23.996333 4730 scope.go:117] "RemoveContainer" containerID="3bc7e0f9a472669cdd38ea228545b8816e82bdb1d8d2bcaf657b559b11cf2209"
Feb 02 07:30:24 crc kubenswrapper[4730]: I0202 07:30:24.939186 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b","Type":"ContainerStarted","Data":"f3abc2ccd1447254793679d18d64789358ee1aa7d84ee235ee9283a31be5f7ca"}
Feb 02 07:30:24 crc kubenswrapper[4730]: I0202 07:30:24.954261 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.954240048 podStartE2EDuration="2.954240048s" podCreationTimestamp="2026-02-02 07:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:30:24.950742636 +0000 UTC m=+198.371946004" watchObservedRunningTime="2026-02-02 07:30:24.954240048 +0000 UTC m=+198.375443416"
Feb 02 07:30:25 crc kubenswrapper[4730]: I0202 07:30:25.267251 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="876e37b2-1950-4143-b730-eb121a64a0a8" path="/var/lib/kubelet/pods/876e37b2-1950-4143-b730-eb121a64a0a8/volumes"
Feb 02 07:30:27 crc kubenswrapper[4730]: I0202 07:30:27.660507 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:30:27 crc kubenswrapper[4730]: I0202 07:30:27.660611 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:30:31 crc kubenswrapper[4730]: I0202 07:30:31.426972 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-557f66d8d6-sd9xq"]
Feb 02 07:30:31 crc kubenswrapper[4730]: I0202 07:30:31.427671 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" podUID="fd96b5b4-50b6-4784-9927-8ad03af50b45" containerName="controller-manager" containerID="cri-o://f17e38821be73a57584c9414bcf297c3bb3618499e8076e7bba7aeef734c4e6d" gracePeriod=30
Feb 02 07:30:31 crc kubenswrapper[4730]: I0202 07:30:31.443608 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg"]
Feb 02 07:30:31 crc kubenswrapper[4730]: I0202 07:30:31.443994 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" podUID="4e9d33f9-30b1-42a3-baa7-11e5edb70279" containerName="route-controller-manager" containerID="cri-o://b3548735f4d96f7c64d4bfb7968799a771f825aab47231f2264fe8775eafe965" gracePeriod=30
Feb 02 07:30:31 crc kubenswrapper[4730]: I0202 07:30:31.975236 4730 generic.go:334] "Generic (PLEG): container finished" podID="4e9d33f9-30b1-42a3-baa7-11e5edb70279" containerID="b3548735f4d96f7c64d4bfb7968799a771f825aab47231f2264fe8775eafe965" exitCode=0
Feb 02 07:30:31 crc kubenswrapper[4730]: I0202 07:30:31.975402 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" event={"ID":"4e9d33f9-30b1-42a3-baa7-11e5edb70279","Type":"ContainerDied","Data":"b3548735f4d96f7c64d4bfb7968799a771f825aab47231f2264fe8775eafe965"}
Feb 02 07:30:31 crc kubenswrapper[4730]: I0202 07:30:31.975588 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" event={"ID":"4e9d33f9-30b1-42a3-baa7-11e5edb70279","Type":"ContainerDied","Data":"3663ed4ad8d65123f52d886c45b485c4cb26f00bffc013a90fbf4b5af077f4c9"}
Feb 02 07:30:31 crc kubenswrapper[4730]: I0202 07:30:31.975608 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3663ed4ad8d65123f52d886c45b485c4cb26f00bffc013a90fbf4b5af077f4c9"
Feb 02 07:30:31 crc kubenswrapper[4730]: I0202 07:30:31.977376 4730 generic.go:334] "Generic (PLEG): container finished" podID="fd96b5b4-50b6-4784-9927-8ad03af50b45" containerID="f17e38821be73a57584c9414bcf297c3bb3618499e8076e7bba7aeef734c4e6d" exitCode=0
Feb 02 07:30:31 crc kubenswrapper[4730]: I0202 07:30:31.977407 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" event={"ID":"fd96b5b4-50b6-4784-9927-8ad03af50b45","Type":"ContainerDied","Data":"f17e38821be73a57584c9414bcf297c3bb3618499e8076e7bba7aeef734c4e6d"}
Feb 02 07:30:31 crc kubenswrapper[4730]: I0202 07:30:31.977428 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" event={"ID":"fd96b5b4-50b6-4784-9927-8ad03af50b45","Type":"ContainerDied","Data":"00f7a325632099035c54427272cbedf635282c79029362c90c9a70589d15f7f4"}
Feb 02 07:30:31 crc kubenswrapper[4730]: I0202 07:30:31.977438 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00f7a325632099035c54427272cbedf635282c79029362c90c9a70589d15f7f4"
Feb 02 07:30:31 crc kubenswrapper[4730]: I0202 07:30:31.980235 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg"
Feb 02 07:30:31 crc kubenswrapper[4730]: I0202 07:30:31.987406 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq"
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.038657 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e9d33f9-30b1-42a3-baa7-11e5edb70279-client-ca\") pod \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") "
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.038844 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw6jp\" (UniqueName: \"kubernetes.io/projected/fd96b5b4-50b6-4784-9927-8ad03af50b45-kube-api-access-tw6jp\") pod \"fd96b5b4-50b6-4784-9927-8ad03af50b45\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") "
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.038877 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e9d33f9-30b1-42a3-baa7-11e5edb70279-config\") pod \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") "
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.038909 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd96b5b4-50b6-4784-9927-8ad03af50b45-serving-cert\") pod \"fd96b5b4-50b6-4784-9927-8ad03af50b45\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") "
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.038968 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-proxy-ca-bundles\") pod \"fd96b5b4-50b6-4784-9927-8ad03af50b45\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") "
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.038994 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqgmq\" (UniqueName: \"kubernetes.io/projected/4e9d33f9-30b1-42a3-baa7-11e5edb70279-kube-api-access-rqgmq\") pod \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") "
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.039014 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e9d33f9-30b1-42a3-baa7-11e5edb70279-serving-cert\") pod \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\" (UID: \"4e9d33f9-30b1-42a3-baa7-11e5edb70279\") "
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.039040 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-config\") pod \"fd96b5b4-50b6-4784-9927-8ad03af50b45\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") "
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.039056 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-client-ca\") pod \"fd96b5b4-50b6-4784-9927-8ad03af50b45\" (UID: \"fd96b5b4-50b6-4784-9927-8ad03af50b45\") "
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.039419 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e9d33f9-30b1-42a3-baa7-11e5edb70279-client-ca" (OuterVolumeSpecName: "client-ca") pod "4e9d33f9-30b1-42a3-baa7-11e5edb70279" (UID: "4e9d33f9-30b1-42a3-baa7-11e5edb70279"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.039796 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fd96b5b4-50b6-4784-9927-8ad03af50b45" (UID: "fd96b5b4-50b6-4784-9927-8ad03af50b45"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.039943 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-client-ca" (OuterVolumeSpecName: "client-ca") pod "fd96b5b4-50b6-4784-9927-8ad03af50b45" (UID: "fd96b5b4-50b6-4784-9927-8ad03af50b45"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.041342 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e9d33f9-30b1-42a3-baa7-11e5edb70279-config" (OuterVolumeSpecName: "config") pod "4e9d33f9-30b1-42a3-baa7-11e5edb70279" (UID: "4e9d33f9-30b1-42a3-baa7-11e5edb70279"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.043089 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-config" (OuterVolumeSpecName: "config") pod "fd96b5b4-50b6-4784-9927-8ad03af50b45" (UID: "fd96b5b4-50b6-4784-9927-8ad03af50b45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.045628 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e9d33f9-30b1-42a3-baa7-11e5edb70279-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4e9d33f9-30b1-42a3-baa7-11e5edb70279" (UID: "4e9d33f9-30b1-42a3-baa7-11e5edb70279"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.045802 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd96b5b4-50b6-4784-9927-8ad03af50b45-kube-api-access-tw6jp" (OuterVolumeSpecName: "kube-api-access-tw6jp") pod "fd96b5b4-50b6-4784-9927-8ad03af50b45" (UID: "fd96b5b4-50b6-4784-9927-8ad03af50b45"). InnerVolumeSpecName "kube-api-access-tw6jp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.045885 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e9d33f9-30b1-42a3-baa7-11e5edb70279-kube-api-access-rqgmq" (OuterVolumeSpecName: "kube-api-access-rqgmq") pod "4e9d33f9-30b1-42a3-baa7-11e5edb70279" (UID: "4e9d33f9-30b1-42a3-baa7-11e5edb70279"). InnerVolumeSpecName "kube-api-access-rqgmq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.045990 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd96b5b4-50b6-4784-9927-8ad03af50b45-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fd96b5b4-50b6-4784-9927-8ad03af50b45" (UID: "fd96b5b4-50b6-4784-9927-8ad03af50b45"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.140134 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e9d33f9-30b1-42a3-baa7-11e5edb70279-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.140203 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw6jp\" (UniqueName: \"kubernetes.io/projected/fd96b5b4-50b6-4784-9927-8ad03af50b45-kube-api-access-tw6jp\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.140214 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e9d33f9-30b1-42a3-baa7-11e5edb70279-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.140222 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd96b5b4-50b6-4784-9927-8ad03af50b45-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.140231 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.140241 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqgmq\" (UniqueName: \"kubernetes.io/projected/4e9d33f9-30b1-42a3-baa7-11e5edb70279-kube-api-access-rqgmq\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.140248 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e9d33f9-30b1-42a3-baa7-11e5edb70279-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.140258 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.140265 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd96b5b4-50b6-4784-9927-8ad03af50b45-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.849732 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c49cd49bf-6mffr"]
Feb 02 07:30:32 crc kubenswrapper[4730]: E0202 07:30:32.850250 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e9d33f9-30b1-42a3-baa7-11e5edb70279" containerName="route-controller-manager"
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.850265 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e9d33f9-30b1-42a3-baa7-11e5edb70279" containerName="route-controller-manager"
Feb 02 07:30:32 crc kubenswrapper[4730]: E0202 07:30:32.850280 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="876e37b2-1950-4143-b730-eb121a64a0a8" containerName="extract-utilities"
Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.850288 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="876e37b2-1950-4143-b730-eb121a64a0a8" containerName="extract-utilities"
Feb 02 07:30:32 crc kubenswrapper[4730]: E0202 07:30:32.850302 4730 cpu_manager.go:410]
"RemoveStaleState: removing container" podUID="fd96b5b4-50b6-4784-9927-8ad03af50b45" containerName="controller-manager" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.850311 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd96b5b4-50b6-4784-9927-8ad03af50b45" containerName="controller-manager" Feb 02 07:30:32 crc kubenswrapper[4730]: E0202 07:30:32.850321 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="876e37b2-1950-4143-b730-eb121a64a0a8" containerName="extract-content" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.850328 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="876e37b2-1950-4143-b730-eb121a64a0a8" containerName="extract-content" Feb 02 07:30:32 crc kubenswrapper[4730]: E0202 07:30:32.850340 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="876e37b2-1950-4143-b730-eb121a64a0a8" containerName="registry-server" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.850347 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="876e37b2-1950-4143-b730-eb121a64a0a8" containerName="registry-server" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.850452 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e9d33f9-30b1-42a3-baa7-11e5edb70279" containerName="route-controller-manager" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.850466 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="876e37b2-1950-4143-b730-eb121a64a0a8" containerName="registry-server" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.850480 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd96b5b4-50b6-4784-9927-8ad03af50b45" containerName="controller-manager" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.850858 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.852906 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm"] Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.853614 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.863099 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c49cd49bf-6mffr"] Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.866613 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm"] Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.950895 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-config\") pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.950982 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-proxy-ca-bundles\") pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.951033 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-client-ca\") pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.951070 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c19b57d-b602-4920-a582-e2df86bacfc6-client-ca\") pod \"route-controller-manager-658f78c59f-ttvzm\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.951111 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c19b57d-b602-4920-a582-e2df86bacfc6-serving-cert\") pod \"route-controller-manager-658f78c59f-ttvzm\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.951151 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c19b57d-b602-4920-a582-e2df86bacfc6-config\") pod \"route-controller-manager-658f78c59f-ttvzm\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.951253 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmlzv\" (UniqueName: \"kubernetes.io/projected/dccb55fd-29ad-401e-b6c9-d664f3589c9f-kube-api-access-zmlzv\") pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " 
pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.951290 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dccb55fd-29ad-401e-b6c9-d664f3589c9f-serving-cert\") pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.951336 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjqxk\" (UniqueName: \"kubernetes.io/projected/0c19b57d-b602-4920-a582-e2df86bacfc6-kube-api-access-rjqxk\") pod \"route-controller-manager-658f78c59f-ttvzm\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.983198 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-557f66d8d6-sd9xq" Feb 02 07:30:32 crc kubenswrapper[4730]: I0202 07:30:32.983267 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.006946 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg"] Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.010242 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5548c9bb94-dpqwg"] Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.018699 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-557f66d8d6-sd9xq"] Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.021835 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-557f66d8d6-sd9xq"] Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.053554 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c19b57d-b602-4920-a582-e2df86bacfc6-config\") pod \"route-controller-manager-658f78c59f-ttvzm\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.053619 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmlzv\" (UniqueName: \"kubernetes.io/projected/dccb55fd-29ad-401e-b6c9-d664f3589c9f-kube-api-access-zmlzv\") pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.053648 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dccb55fd-29ad-401e-b6c9-d664f3589c9f-serving-cert\") 
pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.053681 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjqxk\" (UniqueName: \"kubernetes.io/projected/0c19b57d-b602-4920-a582-e2df86bacfc6-kube-api-access-rjqxk\") pod \"route-controller-manager-658f78c59f-ttvzm\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.053732 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-config\") pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.053753 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-proxy-ca-bundles\") pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.053771 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-client-ca\") pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.053797 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c19b57d-b602-4920-a582-e2df86bacfc6-client-ca\") pod \"route-controller-manager-658f78c59f-ttvzm\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.053818 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c19b57d-b602-4920-a582-e2df86bacfc6-serving-cert\") pod \"route-controller-manager-658f78c59f-ttvzm\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.056571 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c19b57d-b602-4920-a582-e2df86bacfc6-client-ca\") pod \"route-controller-manager-658f78c59f-ttvzm\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.057040 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-config\") pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.057091 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c19b57d-b602-4920-a582-e2df86bacfc6-config\") pod \"route-controller-manager-658f78c59f-ttvzm\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 
07:30:33.057311 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-proxy-ca-bundles\") pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.057633 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-client-ca\") pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.061844 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c19b57d-b602-4920-a582-e2df86bacfc6-serving-cert\") pod \"route-controller-manager-658f78c59f-ttvzm\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.062199 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dccb55fd-29ad-401e-b6c9-d664f3589c9f-serving-cert\") pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.072436 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmlzv\" (UniqueName: \"kubernetes.io/projected/dccb55fd-29ad-401e-b6c9-d664f3589c9f-kube-api-access-zmlzv\") pod \"controller-manager-c49cd49bf-6mffr\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " 
pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.073545 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjqxk\" (UniqueName: \"kubernetes.io/projected/0c19b57d-b602-4920-a582-e2df86bacfc6-kube-api-access-rjqxk\") pod \"route-controller-manager-658f78c59f-ttvzm\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.184280 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.193376 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.259453 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e9d33f9-30b1-42a3-baa7-11e5edb70279" path="/var/lib/kubelet/pods/4e9d33f9-30b1-42a3-baa7-11e5edb70279/volumes" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.260260 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd96b5b4-50b6-4784-9927-8ad03af50b45" path="/var/lib/kubelet/pods/fd96b5b4-50b6-4784-9927-8ad03af50b45/volumes" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.596398 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c49cd49bf-6mffr"] Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.672336 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm"] Feb 02 07:30:33 crc kubenswrapper[4730]: W0202 07:30:33.687819 4730 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c19b57d_b602_4920_a582_e2df86bacfc6.slice/crio-1f2476f80a6e7400bfcae6e943246c86fcb57a685991ed4500cc6ff98c9d18dd WatchSource:0}: Error finding container 1f2476f80a6e7400bfcae6e943246c86fcb57a685991ed4500cc6ff98c9d18dd: Status 404 returned error can't find the container with id 1f2476f80a6e7400bfcae6e943246c86fcb57a685991ed4500cc6ff98c9d18dd Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.988979 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" event={"ID":"0c19b57d-b602-4920-a582-e2df86bacfc6","Type":"ContainerStarted","Data":"0e0ea84e02befcd6f744e4882a6c1a368137819c17bc11379fa4a124a94f2728"} Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.989331 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" event={"ID":"0c19b57d-b602-4920-a582-e2df86bacfc6","Type":"ContainerStarted","Data":"1f2476f80a6e7400bfcae6e943246c86fcb57a685991ed4500cc6ff98c9d18dd"} Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.989777 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.991283 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" event={"ID":"dccb55fd-29ad-401e-b6c9-d664f3589c9f","Type":"ContainerStarted","Data":"6f66d764b712cb3863bb492057a1d6049345ce94b56b27f4cee9ba403c5010f7"} Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.991322 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" 
event={"ID":"dccb55fd-29ad-401e-b6c9-d664f3589c9f","Type":"ContainerStarted","Data":"bde8e08701dc13d79cbe9b59ad5d6fd83f46a17a9e9fc7fa9f9148254498e3f3"} Feb 02 07:30:33 crc kubenswrapper[4730]: I0202 07:30:33.991691 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:34 crc kubenswrapper[4730]: I0202 07:30:34.009351 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:34 crc kubenswrapper[4730]: I0202 07:30:34.041763 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" podStartSLOduration=3.041745854 podStartE2EDuration="3.041745854s" podCreationTimestamp="2026-02-02 07:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:30:34.04082681 +0000 UTC m=+207.462030158" watchObservedRunningTime="2026-02-02 07:30:34.041745854 +0000 UTC m=+207.462949202" Feb 02 07:30:34 crc kubenswrapper[4730]: I0202 07:30:34.042958 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" podStartSLOduration=3.042951026 podStartE2EDuration="3.042951026s" podCreationTimestamp="2026-02-02 07:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:30:34.025978868 +0000 UTC m=+207.447182216" watchObservedRunningTime="2026-02-02 07:30:34.042951026 +0000 UTC m=+207.464154374" Feb 02 07:30:34 crc kubenswrapper[4730]: I0202 07:30:34.312319 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:41 crc 
kubenswrapper[4730]: I0202 07:30:41.998471 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" podUID="7c27212e-7271-4169-9aa7-8b2128167055" containerName="oauth-openshift" containerID="cri-o://b8c83b571a376d646906aea7c97dece00918bfe0b6cf4d59315526210bae14e7" gracePeriod=15 Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.549859 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.572862 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-session\") pod \"7c27212e-7271-4169-9aa7-8b2128167055\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.572934 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-trusted-ca-bundle\") pod \"7c27212e-7271-4169-9aa7-8b2128167055\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.572994 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-provider-selection\") pod \"7c27212e-7271-4169-9aa7-8b2128167055\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.573078 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-serving-cert\") pod \"7c27212e-7271-4169-9aa7-8b2128167055\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.574285 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7c27212e-7271-4169-9aa7-8b2128167055" (UID: "7c27212e-7271-4169-9aa7-8b2128167055"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.574749 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-audit-policies\") pod \"7c27212e-7271-4169-9aa7-8b2128167055\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.574863 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-cliconfig\") pod \"7c27212e-7271-4169-9aa7-8b2128167055\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.575000 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-login\") pod \"7c27212e-7271-4169-9aa7-8b2128167055\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.575073 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7c27212e-7271-4169-9aa7-8b2128167055" (UID: "7c27212e-7271-4169-9aa7-8b2128167055"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.575077 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-idp-0-file-data\") pod \"7c27212e-7271-4169-9aa7-8b2128167055\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.575172 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-router-certs\") pod \"7c27212e-7271-4169-9aa7-8b2128167055\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.575197 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-error\") pod \"7c27212e-7271-4169-9aa7-8b2128167055\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.575221 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-ocp-branding-template\") pod \"7c27212e-7271-4169-9aa7-8b2128167055\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.575243 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-2vb4p\" (UniqueName: \"kubernetes.io/projected/7c27212e-7271-4169-9aa7-8b2128167055-kube-api-access-2vb4p\") pod \"7c27212e-7271-4169-9aa7-8b2128167055\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.575271 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-service-ca\") pod \"7c27212e-7271-4169-9aa7-8b2128167055\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.575289 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c27212e-7271-4169-9aa7-8b2128167055-audit-dir\") pod \"7c27212e-7271-4169-9aa7-8b2128167055\" (UID: \"7c27212e-7271-4169-9aa7-8b2128167055\") " Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.575528 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.575541 4730 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.575564 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c27212e-7271-4169-9aa7-8b2128167055-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7c27212e-7271-4169-9aa7-8b2128167055" (UID: "7c27212e-7271-4169-9aa7-8b2128167055"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.577276 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "7c27212e-7271-4169-9aa7-8b2128167055" (UID: "7c27212e-7271-4169-9aa7-8b2128167055"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.578616 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7c27212e-7271-4169-9aa7-8b2128167055" (UID: "7c27212e-7271-4169-9aa7-8b2128167055"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.585419 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7c27212e-7271-4169-9aa7-8b2128167055" (UID: "7c27212e-7271-4169-9aa7-8b2128167055"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.593068 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c27212e-7271-4169-9aa7-8b2128167055-kube-api-access-2vb4p" (OuterVolumeSpecName: "kube-api-access-2vb4p") pod "7c27212e-7271-4169-9aa7-8b2128167055" (UID: "7c27212e-7271-4169-9aa7-8b2128167055"). InnerVolumeSpecName "kube-api-access-2vb4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.601410 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7c27212e-7271-4169-9aa7-8b2128167055" (UID: "7c27212e-7271-4169-9aa7-8b2128167055"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.603303 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7c27212e-7271-4169-9aa7-8b2128167055" (UID: "7c27212e-7271-4169-9aa7-8b2128167055"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.603922 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "7c27212e-7271-4169-9aa7-8b2128167055" (UID: "7c27212e-7271-4169-9aa7-8b2128167055"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.610456 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7c27212e-7271-4169-9aa7-8b2128167055" (UID: "7c27212e-7271-4169-9aa7-8b2128167055"). 
InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.610842 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7c27212e-7271-4169-9aa7-8b2128167055" (UID: "7c27212e-7271-4169-9aa7-8b2128167055"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.611999 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7c27212e-7271-4169-9aa7-8b2128167055" (UID: "7c27212e-7271-4169-9aa7-8b2128167055"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.612200 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7c27212e-7271-4169-9aa7-8b2128167055" (UID: "7c27212e-7271-4169-9aa7-8b2128167055"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.676936 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.676991 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.677014 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.677032 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.677054 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.677073 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.677095 4730 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2vb4p\" (UniqueName: \"kubernetes.io/projected/7c27212e-7271-4169-9aa7-8b2128167055-kube-api-access-2vb4p\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.677122 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.677144 4730 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c27212e-7271-4169-9aa7-8b2128167055-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.677192 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.677211 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.677230 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c27212e-7271-4169-9aa7-8b2128167055-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.862015 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-575cc5b957-q2pwz"] Feb 02 07:30:42 crc kubenswrapper[4730]: E0202 07:30:42.862507 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c27212e-7271-4169-9aa7-8b2128167055" 
containerName="oauth-openshift" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.862534 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c27212e-7271-4169-9aa7-8b2128167055" containerName="oauth-openshift" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.862690 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c27212e-7271-4169-9aa7-8b2128167055" containerName="oauth-openshift" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.863371 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.873224 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-575cc5b957-q2pwz"] Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.879524 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-login\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.879580 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.879613 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v4d2\" (UniqueName: 
\"kubernetes.io/projected/ba450d82-c017-40a1-a790-50bcc3a8ce20-kube-api-access-6v4d2\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.879656 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-session\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.879725 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.879770 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba450d82-c017-40a1-a790-50bcc3a8ce20-audit-dir\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.879860 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-audit-policies\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " 
pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.879894 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-error\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.879931 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-router-certs\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.879961 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.880024 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-service-ca\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.880061 
4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.880116 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.880192 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.981463 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.981569 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.982085 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-login\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.982119 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.982148 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v4d2\" (UniqueName: \"kubernetes.io/projected/ba450d82-c017-40a1-a790-50bcc3a8ce20-kube-api-access-6v4d2\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.982194 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-session\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " 
pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.982215 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.982239 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba450d82-c017-40a1-a790-50bcc3a8ce20-audit-dir\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.982318 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-audit-policies\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.982343 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-error\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.982368 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-router-certs\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.982390 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.982417 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-service-ca\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.982477 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.983383 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") 
" pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.983546 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba450d82-c017-40a1-a790-50bcc3a8ce20-audit-dir\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.984347 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.984549 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-audit-policies\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.984778 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.985205 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-service-ca\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.985614 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.986184 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-router-certs\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.986513 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-login\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.986869 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " 
pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.987640 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-session\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.987928 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-error\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.988279 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:42 crc kubenswrapper[4730]: I0202 07:30:42.998702 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v4d2\" (UniqueName: \"kubernetes.io/projected/ba450d82-c017-40a1-a790-50bcc3a8ce20-kube-api-access-6v4d2\") pod \"oauth-openshift-575cc5b957-q2pwz\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:43 crc kubenswrapper[4730]: I0202 07:30:43.045899 4730 generic.go:334] "Generic (PLEG): container finished" podID="7c27212e-7271-4169-9aa7-8b2128167055" 
containerID="b8c83b571a376d646906aea7c97dece00918bfe0b6cf4d59315526210bae14e7" exitCode=0 Feb 02 07:30:43 crc kubenswrapper[4730]: I0202 07:30:43.045964 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" event={"ID":"7c27212e-7271-4169-9aa7-8b2128167055","Type":"ContainerDied","Data":"b8c83b571a376d646906aea7c97dece00918bfe0b6cf4d59315526210bae14e7"} Feb 02 07:30:43 crc kubenswrapper[4730]: I0202 07:30:43.045977 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" Feb 02 07:30:43 crc kubenswrapper[4730]: I0202 07:30:43.046008 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4hw4w" event={"ID":"7c27212e-7271-4169-9aa7-8b2128167055","Type":"ContainerDied","Data":"a27023c5fd8c98efc724f97ca3a09f8e74cb10072b49462c7850baa209f0818b"} Feb 02 07:30:43 crc kubenswrapper[4730]: I0202 07:30:43.046037 4730 scope.go:117] "RemoveContainer" containerID="b8c83b571a376d646906aea7c97dece00918bfe0b6cf4d59315526210bae14e7" Feb 02 07:30:43 crc kubenswrapper[4730]: I0202 07:30:43.065513 4730 scope.go:117] "RemoveContainer" containerID="b8c83b571a376d646906aea7c97dece00918bfe0b6cf4d59315526210bae14e7" Feb 02 07:30:43 crc kubenswrapper[4730]: E0202 07:30:43.066977 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8c83b571a376d646906aea7c97dece00918bfe0b6cf4d59315526210bae14e7\": container with ID starting with b8c83b571a376d646906aea7c97dece00918bfe0b6cf4d59315526210bae14e7 not found: ID does not exist" containerID="b8c83b571a376d646906aea7c97dece00918bfe0b6cf4d59315526210bae14e7" Feb 02 07:30:43 crc kubenswrapper[4730]: I0202 07:30:43.067018 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b8c83b571a376d646906aea7c97dece00918bfe0b6cf4d59315526210bae14e7"} err="failed to get container status \"b8c83b571a376d646906aea7c97dece00918bfe0b6cf4d59315526210bae14e7\": rpc error: code = NotFound desc = could not find container \"b8c83b571a376d646906aea7c97dece00918bfe0b6cf4d59315526210bae14e7\": container with ID starting with b8c83b571a376d646906aea7c97dece00918bfe0b6cf4d59315526210bae14e7 not found: ID does not exist" Feb 02 07:30:43 crc kubenswrapper[4730]: I0202 07:30:43.081148 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4hw4w"] Feb 02 07:30:43 crc kubenswrapper[4730]: I0202 07:30:43.085061 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4hw4w"] Feb 02 07:30:43 crc kubenswrapper[4730]: I0202 07:30:43.180951 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:43 crc kubenswrapper[4730]: I0202 07:30:43.264414 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c27212e-7271-4169-9aa7-8b2128167055" path="/var/lib/kubelet/pods/7c27212e-7271-4169-9aa7-8b2128167055/volumes" Feb 02 07:30:43 crc kubenswrapper[4730]: I0202 07:30:43.656793 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-575cc5b957-q2pwz"] Feb 02 07:30:43 crc kubenswrapper[4730]: W0202 07:30:43.664063 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba450d82_c017_40a1_a790_50bcc3a8ce20.slice/crio-864c5f76351878dfe6461a7a21e34db033dccd28229ef546dc60451c1ac1ae93 WatchSource:0}: Error finding container 864c5f76351878dfe6461a7a21e34db033dccd28229ef546dc60451c1ac1ae93: Status 404 returned error can't find the container with id 
864c5f76351878dfe6461a7a21e34db033dccd28229ef546dc60451c1ac1ae93 Feb 02 07:30:44 crc kubenswrapper[4730]: I0202 07:30:44.055510 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" event={"ID":"ba450d82-c017-40a1-a790-50bcc3a8ce20","Type":"ContainerStarted","Data":"7df72949a48e5f838fa86144b0c876fe54b78d6f8598fdef828eb76438753d94"} Feb 02 07:30:44 crc kubenswrapper[4730]: I0202 07:30:44.055569 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" event={"ID":"ba450d82-c017-40a1-a790-50bcc3a8ce20","Type":"ContainerStarted","Data":"864c5f76351878dfe6461a7a21e34db033dccd28229ef546dc60451c1ac1ae93"} Feb 02 07:30:44 crc kubenswrapper[4730]: I0202 07:30:44.056074 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:44 crc kubenswrapper[4730]: I0202 07:30:44.076908 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" podStartSLOduration=28.076889279 podStartE2EDuration="28.076889279s" podCreationTimestamp="2026-02-02 07:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:30:44.074931308 +0000 UTC m=+217.496134676" watchObservedRunningTime="2026-02-02 07:30:44.076889279 +0000 UTC m=+217.498092647" Feb 02 07:30:44 crc kubenswrapper[4730]: I0202 07:30:44.648836 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:30:51 crc kubenswrapper[4730]: I0202 07:30:51.436347 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c49cd49bf-6mffr"] Feb 02 07:30:51 crc kubenswrapper[4730]: I0202 07:30:51.437104 4730 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" podUID="dccb55fd-29ad-401e-b6c9-d664f3589c9f" containerName="controller-manager" containerID="cri-o://6f66d764b712cb3863bb492057a1d6049345ce94b56b27f4cee9ba403c5010f7" gracePeriod=30 Feb 02 07:30:51 crc kubenswrapper[4730]: I0202 07:30:51.523980 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm"] Feb 02 07:30:51 crc kubenswrapper[4730]: I0202 07:30:51.524201 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" podUID="0c19b57d-b602-4920-a582-e2df86bacfc6" containerName="route-controller-manager" containerID="cri-o://0e0ea84e02befcd6f744e4882a6c1a368137819c17bc11379fa4a124a94f2728" gracePeriod=30 Feb 02 07:30:51 crc kubenswrapper[4730]: I0202 07:30:51.998902 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.055262 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.108187 4730 generic.go:334] "Generic (PLEG): container finished" podID="0c19b57d-b602-4920-a582-e2df86bacfc6" containerID="0e0ea84e02befcd6f744e4882a6c1a368137819c17bc11379fa4a124a94f2728" exitCode=0 Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.108234 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.108270 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" event={"ID":"0c19b57d-b602-4920-a582-e2df86bacfc6","Type":"ContainerDied","Data":"0e0ea84e02befcd6f744e4882a6c1a368137819c17bc11379fa4a124a94f2728"} Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.108305 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm" event={"ID":"0c19b57d-b602-4920-a582-e2df86bacfc6","Type":"ContainerDied","Data":"1f2476f80a6e7400bfcae6e943246c86fcb57a685991ed4500cc6ff98c9d18dd"} Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.108325 4730 scope.go:117] "RemoveContainer" containerID="0e0ea84e02befcd6f744e4882a6c1a368137819c17bc11379fa4a124a94f2728" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.108636 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c19b57d-b602-4920-a582-e2df86bacfc6-client-ca\") pod \"0c19b57d-b602-4920-a582-e2df86bacfc6\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.108713 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c19b57d-b602-4920-a582-e2df86bacfc6-config\") pod \"0c19b57d-b602-4920-a582-e2df86bacfc6\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.108747 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c19b57d-b602-4920-a582-e2df86bacfc6-serving-cert\") pod \"0c19b57d-b602-4920-a582-e2df86bacfc6\" (UID: 
\"0c19b57d-b602-4920-a582-e2df86bacfc6\") " Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.108800 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjqxk\" (UniqueName: \"kubernetes.io/projected/0c19b57d-b602-4920-a582-e2df86bacfc6-kube-api-access-rjqxk\") pod \"0c19b57d-b602-4920-a582-e2df86bacfc6\" (UID: \"0c19b57d-b602-4920-a582-e2df86bacfc6\") " Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.109889 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c19b57d-b602-4920-a582-e2df86bacfc6-client-ca" (OuterVolumeSpecName: "client-ca") pod "0c19b57d-b602-4920-a582-e2df86bacfc6" (UID: "0c19b57d-b602-4920-a582-e2df86bacfc6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.109937 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c19b57d-b602-4920-a582-e2df86bacfc6-config" (OuterVolumeSpecName: "config") pod "0c19b57d-b602-4920-a582-e2df86bacfc6" (UID: "0c19b57d-b602-4920-a582-e2df86bacfc6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.111220 4730 generic.go:334] "Generic (PLEG): container finished" podID="dccb55fd-29ad-401e-b6c9-d664f3589c9f" containerID="6f66d764b712cb3863bb492057a1d6049345ce94b56b27f4cee9ba403c5010f7" exitCode=0 Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.111255 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" event={"ID":"dccb55fd-29ad-401e-b6c9-d664f3589c9f","Type":"ContainerDied","Data":"6f66d764b712cb3863bb492057a1d6049345ce94b56b27f4cee9ba403c5010f7"} Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.111280 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" event={"ID":"dccb55fd-29ad-401e-b6c9-d664f3589c9f","Type":"ContainerDied","Data":"bde8e08701dc13d79cbe9b59ad5d6fd83f46a17a9e9fc7fa9f9148254498e3f3"} Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.111292 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c49cd49bf-6mffr" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.115455 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c19b57d-b602-4920-a582-e2df86bacfc6-kube-api-access-rjqxk" (OuterVolumeSpecName: "kube-api-access-rjqxk") pod "0c19b57d-b602-4920-a582-e2df86bacfc6" (UID: "0c19b57d-b602-4920-a582-e2df86bacfc6"). InnerVolumeSpecName "kube-api-access-rjqxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.115580 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c19b57d-b602-4920-a582-e2df86bacfc6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0c19b57d-b602-4920-a582-e2df86bacfc6" (UID: "0c19b57d-b602-4920-a582-e2df86bacfc6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.126603 4730 scope.go:117] "RemoveContainer" containerID="0e0ea84e02befcd6f744e4882a6c1a368137819c17bc11379fa4a124a94f2728" Feb 02 07:30:52 crc kubenswrapper[4730]: E0202 07:30:52.127387 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e0ea84e02befcd6f744e4882a6c1a368137819c17bc11379fa4a124a94f2728\": container with ID starting with 0e0ea84e02befcd6f744e4882a6c1a368137819c17bc11379fa4a124a94f2728 not found: ID does not exist" containerID="0e0ea84e02befcd6f744e4882a6c1a368137819c17bc11379fa4a124a94f2728" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.127426 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0ea84e02befcd6f744e4882a6c1a368137819c17bc11379fa4a124a94f2728"} err="failed to get container status \"0e0ea84e02befcd6f744e4882a6c1a368137819c17bc11379fa4a124a94f2728\": rpc error: code = NotFound desc = could not find container \"0e0ea84e02befcd6f744e4882a6c1a368137819c17bc11379fa4a124a94f2728\": container with ID starting with 0e0ea84e02befcd6f744e4882a6c1a368137819c17bc11379fa4a124a94f2728 not found: ID does not exist" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.127449 4730 scope.go:117] "RemoveContainer" containerID="6f66d764b712cb3863bb492057a1d6049345ce94b56b27f4cee9ba403c5010f7" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.145675 4730 scope.go:117] 
"RemoveContainer" containerID="6f66d764b712cb3863bb492057a1d6049345ce94b56b27f4cee9ba403c5010f7" Feb 02 07:30:52 crc kubenswrapper[4730]: E0202 07:30:52.146351 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f66d764b712cb3863bb492057a1d6049345ce94b56b27f4cee9ba403c5010f7\": container with ID starting with 6f66d764b712cb3863bb492057a1d6049345ce94b56b27f4cee9ba403c5010f7 not found: ID does not exist" containerID="6f66d764b712cb3863bb492057a1d6049345ce94b56b27f4cee9ba403c5010f7" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.146394 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f66d764b712cb3863bb492057a1d6049345ce94b56b27f4cee9ba403c5010f7"} err="failed to get container status \"6f66d764b712cb3863bb492057a1d6049345ce94b56b27f4cee9ba403c5010f7\": rpc error: code = NotFound desc = could not find container \"6f66d764b712cb3863bb492057a1d6049345ce94b56b27f4cee9ba403c5010f7\": container with ID starting with 6f66d764b712cb3863bb492057a1d6049345ce94b56b27f4cee9ba403c5010f7 not found: ID does not exist" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.209999 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-client-ca\") pod \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.210063 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-config\") pod \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.210087 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/dccb55fd-29ad-401e-b6c9-d664f3589c9f-serving-cert\") pod \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.210153 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmlzv\" (UniqueName: \"kubernetes.io/projected/dccb55fd-29ad-401e-b6c9-d664f3589c9f-kube-api-access-zmlzv\") pod \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.210258 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-proxy-ca-bundles\") pod \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\" (UID: \"dccb55fd-29ad-401e-b6c9-d664f3589c9f\") " Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.210525 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c19b57d-b602-4920-a582-e2df86bacfc6-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.210548 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c19b57d-b602-4920-a582-e2df86bacfc6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.210575 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjqxk\" (UniqueName: \"kubernetes.io/projected/0c19b57d-b602-4920-a582-e2df86bacfc6-kube-api-access-rjqxk\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.210587 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c19b57d-b602-4920-a582-e2df86bacfc6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:52 crc 
kubenswrapper[4730]: I0202 07:30:52.210923 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-client-ca" (OuterVolumeSpecName: "client-ca") pod "dccb55fd-29ad-401e-b6c9-d664f3589c9f" (UID: "dccb55fd-29ad-401e-b6c9-d664f3589c9f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.211037 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dccb55fd-29ad-401e-b6c9-d664f3589c9f" (UID: "dccb55fd-29ad-401e-b6c9-d664f3589c9f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.211074 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-config" (OuterVolumeSpecName: "config") pod "dccb55fd-29ad-401e-b6c9-d664f3589c9f" (UID: "dccb55fd-29ad-401e-b6c9-d664f3589c9f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.214102 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dccb55fd-29ad-401e-b6c9-d664f3589c9f-kube-api-access-zmlzv" (OuterVolumeSpecName: "kube-api-access-zmlzv") pod "dccb55fd-29ad-401e-b6c9-d664f3589c9f" (UID: "dccb55fd-29ad-401e-b6c9-d664f3589c9f"). InnerVolumeSpecName "kube-api-access-zmlzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.214794 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dccb55fd-29ad-401e-b6c9-d664f3589c9f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dccb55fd-29ad-401e-b6c9-d664f3589c9f" (UID: "dccb55fd-29ad-401e-b6c9-d664f3589c9f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.312537 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.312597 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dccb55fd-29ad-401e-b6c9-d664f3589c9f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.312618 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmlzv\" (UniqueName: \"kubernetes.io/projected/dccb55fd-29ad-401e-b6c9-d664f3589c9f-kube-api-access-zmlzv\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.312638 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.312657 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dccb55fd-29ad-401e-b6c9-d664f3589c9f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.446529 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm"] Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.456986 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658f78c59f-ttvzm"] Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.462380 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c49cd49bf-6mffr"] Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.467743 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c49cd49bf-6mffr"] Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.863966 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt"] Feb 02 07:30:52 crc kubenswrapper[4730]: E0202 07:30:52.864837 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccb55fd-29ad-401e-b6c9-d664f3589c9f" containerName="controller-manager" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.864870 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccb55fd-29ad-401e-b6c9-d664f3589c9f" containerName="controller-manager" Feb 02 07:30:52 crc kubenswrapper[4730]: E0202 07:30:52.864935 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c19b57d-b602-4920-a582-e2df86bacfc6" containerName="route-controller-manager" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.864950 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c19b57d-b602-4920-a582-e2df86bacfc6" containerName="route-controller-manager" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.865135 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="dccb55fd-29ad-401e-b6c9-d664f3589c9f" containerName="controller-manager" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.865154 4730 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="0c19b57d-b602-4920-a582-e2df86bacfc6" containerName="route-controller-manager" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.865977 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.869221 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.869913 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.870568 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.870876 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.870931 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.871555 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.871782 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6494bdf487-wdjbs"] Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.872465 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.874821 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.875130 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.876125 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.876372 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.876421 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.876487 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.879060 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6494bdf487-wdjbs"] Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.884213 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 07:30:52 crc kubenswrapper[4730]: I0202 07:30:52.887980 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt"] Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.021723 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/80a1fa7c-15de-49c0-af06-20172be2aede-serving-cert\") pod \"route-controller-manager-7bc58ff47d-5brrt\" (UID: \"80a1fa7c-15de-49c0-af06-20172be2aede\") " pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.021793 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x86zc\" (UniqueName: \"kubernetes.io/projected/80a1fa7c-15de-49c0-af06-20172be2aede-kube-api-access-x86zc\") pod \"route-controller-manager-7bc58ff47d-5brrt\" (UID: \"80a1fa7c-15de-49c0-af06-20172be2aede\") " pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.021950 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69965630-1786-4390-926c-d2786e4e7088-serving-cert\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: \"69965630-1786-4390-926c-d2786e4e7088\") " pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.022016 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69965630-1786-4390-926c-d2786e4e7088-proxy-ca-bundles\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: \"69965630-1786-4390-926c-d2786e4e7088\") " pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.022279 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69965630-1786-4390-926c-d2786e4e7088-client-ca\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: \"69965630-1786-4390-926c-d2786e4e7088\") " 
pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.022341 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80a1fa7c-15de-49c0-af06-20172be2aede-client-ca\") pod \"route-controller-manager-7bc58ff47d-5brrt\" (UID: \"80a1fa7c-15de-49c0-af06-20172be2aede\") " pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.022495 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a1fa7c-15de-49c0-af06-20172be2aede-config\") pod \"route-controller-manager-7bc58ff47d-5brrt\" (UID: \"80a1fa7c-15de-49c0-af06-20172be2aede\") " pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.022603 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swwfh\" (UniqueName: \"kubernetes.io/projected/69965630-1786-4390-926c-d2786e4e7088-kube-api-access-swwfh\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: \"69965630-1786-4390-926c-d2786e4e7088\") " pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.022687 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69965630-1786-4390-926c-d2786e4e7088-config\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: \"69965630-1786-4390-926c-d2786e4e7088\") " pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.123987 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/69965630-1786-4390-926c-d2786e4e7088-config\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: \"69965630-1786-4390-926c-d2786e4e7088\") " pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.124076 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80a1fa7c-15de-49c0-af06-20172be2aede-serving-cert\") pod \"route-controller-manager-7bc58ff47d-5brrt\" (UID: \"80a1fa7c-15de-49c0-af06-20172be2aede\") " pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.124123 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x86zc\" (UniqueName: \"kubernetes.io/projected/80a1fa7c-15de-49c0-af06-20172be2aede-kube-api-access-x86zc\") pod \"route-controller-manager-7bc58ff47d-5brrt\" (UID: \"80a1fa7c-15de-49c0-af06-20172be2aede\") " pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.124209 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69965630-1786-4390-926c-d2786e4e7088-serving-cert\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: \"69965630-1786-4390-926c-d2786e4e7088\") " pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.124253 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69965630-1786-4390-926c-d2786e4e7088-proxy-ca-bundles\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: \"69965630-1786-4390-926c-d2786e4e7088\") " pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc 
kubenswrapper[4730]: I0202 07:30:53.124333 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69965630-1786-4390-926c-d2786e4e7088-client-ca\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: \"69965630-1786-4390-926c-d2786e4e7088\") " pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.124365 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80a1fa7c-15de-49c0-af06-20172be2aede-client-ca\") pod \"route-controller-manager-7bc58ff47d-5brrt\" (UID: \"80a1fa7c-15de-49c0-af06-20172be2aede\") " pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.124417 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a1fa7c-15de-49c0-af06-20172be2aede-config\") pod \"route-controller-manager-7bc58ff47d-5brrt\" (UID: \"80a1fa7c-15de-49c0-af06-20172be2aede\") " pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.124474 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swwfh\" (UniqueName: \"kubernetes.io/projected/69965630-1786-4390-926c-d2786e4e7088-kube-api-access-swwfh\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: \"69965630-1786-4390-926c-d2786e4e7088\") " pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.125602 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69965630-1786-4390-926c-d2786e4e7088-client-ca\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: 
\"69965630-1786-4390-926c-d2786e4e7088\") " pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.126424 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69965630-1786-4390-926c-d2786e4e7088-proxy-ca-bundles\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: \"69965630-1786-4390-926c-d2786e4e7088\") " pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.126488 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69965630-1786-4390-926c-d2786e4e7088-config\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: \"69965630-1786-4390-926c-d2786e4e7088\") " pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.126546 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80a1fa7c-15de-49c0-af06-20172be2aede-client-ca\") pod \"route-controller-manager-7bc58ff47d-5brrt\" (UID: \"80a1fa7c-15de-49c0-af06-20172be2aede\") " pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.128260 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a1fa7c-15de-49c0-af06-20172be2aede-config\") pod \"route-controller-manager-7bc58ff47d-5brrt\" (UID: \"80a1fa7c-15de-49c0-af06-20172be2aede\") " pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.130234 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/69965630-1786-4390-926c-d2786e4e7088-serving-cert\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: \"69965630-1786-4390-926c-d2786e4e7088\") " pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.133143 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80a1fa7c-15de-49c0-af06-20172be2aede-serving-cert\") pod \"route-controller-manager-7bc58ff47d-5brrt\" (UID: \"80a1fa7c-15de-49c0-af06-20172be2aede\") " pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.152571 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swwfh\" (UniqueName: \"kubernetes.io/projected/69965630-1786-4390-926c-d2786e4e7088-kube-api-access-swwfh\") pod \"controller-manager-6494bdf487-wdjbs\" (UID: \"69965630-1786-4390-926c-d2786e4e7088\") " pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.156077 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x86zc\" (UniqueName: \"kubernetes.io/projected/80a1fa7c-15de-49c0-af06-20172be2aede-kube-api-access-x86zc\") pod \"route-controller-manager-7bc58ff47d-5brrt\" (UID: \"80a1fa7c-15de-49c0-af06-20172be2aede\") " pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.206625 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.233062 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.259834 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c19b57d-b602-4920-a582-e2df86bacfc6" path="/var/lib/kubelet/pods/0c19b57d-b602-4920-a582-e2df86bacfc6/volumes" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.260557 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dccb55fd-29ad-401e-b6c9-d664f3589c9f" path="/var/lib/kubelet/pods/dccb55fd-29ad-401e-b6c9-d664f3589c9f/volumes" Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.619230 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt"] Feb 02 07:30:53 crc kubenswrapper[4730]: W0202 07:30:53.624076 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80a1fa7c_15de_49c0_af06_20172be2aede.slice/crio-b0da190ad6154b40738abe7268990375b55b164f707d4bbce207d62a434a7d51 WatchSource:0}: Error finding container b0da190ad6154b40738abe7268990375b55b164f707d4bbce207d62a434a7d51: Status 404 returned error can't find the container with id b0da190ad6154b40738abe7268990375b55b164f707d4bbce207d62a434a7d51 Feb 02 07:30:53 crc kubenswrapper[4730]: I0202 07:30:53.679692 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6494bdf487-wdjbs"] Feb 02 07:30:54 crc kubenswrapper[4730]: I0202 07:30:54.129127 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" event={"ID":"69965630-1786-4390-926c-d2786e4e7088","Type":"ContainerStarted","Data":"bc776a2567df15d74439fa33f9a55c47320954075db20eb66f417c869bbd7d21"} Feb 02 07:30:54 crc kubenswrapper[4730]: I0202 07:30:54.129196 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" event={"ID":"69965630-1786-4390-926c-d2786e4e7088","Type":"ContainerStarted","Data":"d7f6895446d4de96324bb2f9cf497858f2af8c84c20d929a2e39d6c0d3c931d5"} Feb 02 07:30:54 crc kubenswrapper[4730]: I0202 07:30:54.129592 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:54 crc kubenswrapper[4730]: I0202 07:30:54.131401 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" event={"ID":"80a1fa7c-15de-49c0-af06-20172be2aede","Type":"ContainerStarted","Data":"03fed22e96b07e8fb480a9621b3b615061e52e04335cbfc34e94ded71ad07b51"} Feb 02 07:30:54 crc kubenswrapper[4730]: I0202 07:30:54.131470 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" event={"ID":"80a1fa7c-15de-49c0-af06-20172be2aede","Type":"ContainerStarted","Data":"b0da190ad6154b40738abe7268990375b55b164f707d4bbce207d62a434a7d51"} Feb 02 07:30:54 crc kubenswrapper[4730]: I0202 07:30:54.132043 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:54 crc kubenswrapper[4730]: I0202 07:30:54.142106 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" Feb 02 07:30:54 crc kubenswrapper[4730]: I0202 07:30:54.153678 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6494bdf487-wdjbs" podStartSLOduration=3.153661015 podStartE2EDuration="3.153661015s" podCreationTimestamp="2026-02-02 07:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-02 07:30:54.149879245 +0000 UTC m=+227.571082613" watchObservedRunningTime="2026-02-02 07:30:54.153661015 +0000 UTC m=+227.574864373" Feb 02 07:30:54 crc kubenswrapper[4730]: I0202 07:30:54.204500 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" podStartSLOduration=3.204477695 podStartE2EDuration="3.204477695s" podCreationTimestamp="2026-02-02 07:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:30:54.202442701 +0000 UTC m=+227.623646069" watchObservedRunningTime="2026-02-02 07:30:54.204477695 +0000 UTC m=+227.625681043" Feb 02 07:30:54 crc kubenswrapper[4730]: I0202 07:30:54.315847 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bc58ff47d-5brrt" Feb 02 07:30:57 crc kubenswrapper[4730]: I0202 07:30:57.660290 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:30:57 crc kubenswrapper[4730]: I0202 07:30:57.660695 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:30:57 crc kubenswrapper[4730]: I0202 07:30:57.660792 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:30:57 crc kubenswrapper[4730]: I0202 
07:30:57.661774 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1"} pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:30:57 crc kubenswrapper[4730]: I0202 07:30:57.661899 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" containerID="cri-o://0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1" gracePeriod=600 Feb 02 07:30:58 crc kubenswrapper[4730]: I0202 07:30:58.159682 4730 generic.go:334] "Generic (PLEG): container finished" podID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerID="0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1" exitCode=0 Feb 02 07:30:58 crc kubenswrapper[4730]: I0202 07:30:58.159746 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerDied","Data":"0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1"} Feb 02 07:30:58 crc kubenswrapper[4730]: I0202 07:30:58.159978 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerStarted","Data":"c8f3d89438b2c90a3df4d2c24ead952c1532c846097a23f6bde4650baadb23c4"} Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.895306 4730 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.896910 4730 kubelet.go:2431] "SyncLoop REMOVE" 
source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.896952 4730 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.897054 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.897445 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7" gracePeriod=15 Feb 02 07:31:01 crc kubenswrapper[4730]: E0202 07:31:01.897600 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.897622 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 07:31:01 crc kubenswrapper[4730]: E0202 07:31:01.897634 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.897667 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 07:31:01 crc kubenswrapper[4730]: E0202 07:31:01.897682 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.897690 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 07:31:01 crc kubenswrapper[4730]: E0202 07:31:01.897701 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.897709 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 07:31:01 crc kubenswrapper[4730]: E0202 07:31:01.897720 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.897754 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 07:31:01 crc kubenswrapper[4730]: E0202 07:31:01.897769 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.897777 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 07:31:01 crc kubenswrapper[4730]: E0202 07:31:01.897794 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.897803 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.897847 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
containerID="cri-o://90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6" gracePeriod=15 Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.897901 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2" gracePeriod=15 Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.897992 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.898009 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.898021 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.898035 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.898044 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.898079 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.897879 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f" gracePeriod=15 Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.897974 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6" gracePeriod=15 Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.901041 4730 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 02 07:31:01 crc kubenswrapper[4730]: E0202 07:31:01.938428 4730 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.41:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.939858 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.939924 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 
02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.939952 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.939977 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.940002 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.940084 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.940146 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:01 crc kubenswrapper[4730]: I0202 07:31:01.940231 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.041696 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.041847 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.042055 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.042121 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 
07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.042154 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.042127 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.042235 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.042272 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.042315 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.042335 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.042467 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.042522 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.042557 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.042589 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.042616 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.042647 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.186290 4730 generic.go:334] "Generic (PLEG): container finished" podID="fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" containerID="f3abc2ccd1447254793679d18d64789358ee1aa7d84ee235ee9283a31be5f7ca" exitCode=0 Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.186360 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b","Type":"ContainerDied","Data":"f3abc2ccd1447254793679d18d64789358ee1aa7d84ee235ee9283a31be5f7ca"} Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.187011 4730 status_manager.go:851] "Failed to get status for pod" podUID="fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.189459 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.190994 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.191838 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6" exitCode=0 Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.191857 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f" exitCode=0 Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.191865 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2" exitCode=0 Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.191871 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6" exitCode=2 Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.191906 4730 scope.go:117] "RemoveContainer" containerID="3a9d1291f223d30787456fc24c382ecd4158c980fe4254add1cd57000238742f" Feb 02 07:31:02 crc kubenswrapper[4730]: I0202 07:31:02.239500 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:02 crc kubenswrapper[4730]: W0202 07:31:02.270308 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-e871ce807f286641cc2bb9c64212ca02de60da163e50ca695e192241c7916162 WatchSource:0}: Error finding container e871ce807f286641cc2bb9c64212ca02de60da163e50ca695e192241c7916162: Status 404 returned error can't find the container with id e871ce807f286641cc2bb9c64212ca02de60da163e50ca695e192241c7916162 Feb 02 07:31:02 crc kubenswrapper[4730]: E0202 07:31:02.272719 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18905d8104f82c0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 07:31:02.272142348 +0000 UTC m=+235.693345736,LastTimestamp:2026-02-02 07:31:02.272142348 +0000 UTC m=+235.693345736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.203263 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.208794 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f87c3f51c55e772afb7622a21d9968f737ecfd95059da5e3fc507f6dda00324c"} Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.208870 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e871ce807f286641cc2bb9c64212ca02de60da163e50ca695e192241c7916162"} Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.209918 4730 status_manager.go:851] "Failed to get status for pod" podUID="fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:03 crc kubenswrapper[4730]: E0202 07:31:03.210687 4730 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.41:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.607599 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.608150 4730 status_manager.go:851] "Failed to get status for pod" podUID="fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.763168 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-kube-api-access\") pod \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\" (UID: \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\") " Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.763390 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-kubelet-dir\") pod \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\" (UID: \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\") " Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.763436 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-var-lock\") pod \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\" (UID: \"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b\") " Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.763460 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" (UID: "fdbc1489-6cf6-42e9-8d12-3ce8418dc04b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.763628 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-var-lock" (OuterVolumeSpecName: "var-lock") pod "fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" (UID: "fdbc1489-6cf6-42e9-8d12-3ce8418dc04b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.763683 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.768903 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" (UID: "fdbc1489-6cf6-42e9-8d12-3ce8418dc04b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.864645 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:03 crc kubenswrapper[4730]: I0202 07:31:03.864683 4730 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdbc1489-6cf6-42e9-8d12-3ce8418dc04b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:03 crc kubenswrapper[4730]: E0202 07:31:03.936777 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18905d8104f82c0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 07:31:02.272142348 +0000 UTC m=+235.693345736,LastTimestamp:2026-02-02 07:31:02.272142348 +0000 UTC m=+235.693345736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.217032 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.217047 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fdbc1489-6cf6-42e9-8d12-3ce8418dc04b","Type":"ContainerDied","Data":"a4525c71f90201cce2c382c05dbdd8c38abf8f55e9db7ea64cb7e891a31ca770"} Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.217099 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4525c71f90201cce2c382c05dbdd8c38abf8f55e9db7ea64cb7e891a31ca770" Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.221121 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.221966 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7" exitCode=0 Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.229570 4730 status_manager.go:851] "Failed to get status for pod" podUID="fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.671988 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.673268 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.673984 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.674512 4730 status_manager.go:851] "Failed to get status for pod" podUID="fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.778823 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.778870 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.778892 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.778890 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.778945 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.779050 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.779151 4730 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.779195 4730 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:04 crc kubenswrapper[4730]: I0202 07:31:04.779207 4730 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:05 crc kubenswrapper[4730]: I0202 07:31:05.230858 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 07:31:05 crc kubenswrapper[4730]: I0202 07:31:05.231652 4730 scope.go:117] "RemoveContainer" containerID="90d8636d904723a6db8d37231b2cbc956ee3ae3288568b3f90696e2ea8fe29a6" Feb 02 07:31:05 crc kubenswrapper[4730]: I0202 07:31:05.231739 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:05 crc kubenswrapper[4730]: I0202 07:31:05.244475 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:05 crc kubenswrapper[4730]: I0202 07:31:05.244759 4730 status_manager.go:851] "Failed to get status for pod" podUID="fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:05 crc kubenswrapper[4730]: I0202 07:31:05.247330 4730 scope.go:117] "RemoveContainer" containerID="1f2795266001fbb46464e10b5c510e4f8840aafd8b15135bc143f2db22d23b5f" Feb 02 07:31:05 crc kubenswrapper[4730]: I0202 07:31:05.263044 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 02 07:31:05 crc kubenswrapper[4730]: I0202 07:31:05.265654 4730 scope.go:117] "RemoveContainer" containerID="a3b6f04e70ce289e8364ef0d41e999dbddd7e9ff42092c24304a1c723921c9e2" Feb 02 07:31:05 crc kubenswrapper[4730]: I0202 07:31:05.284380 4730 scope.go:117] "RemoveContainer" containerID="3a50009a4e814688f206dedc849c05cf0081e94fc9890aa1c4522ae0884a0eb6" Feb 02 07:31:05 crc kubenswrapper[4730]: I0202 07:31:05.299272 4730 scope.go:117] "RemoveContainer" containerID="9a2ef80ef8f200d3d1b1aaa908d106c948439b404f6ef08961400511d3c679e7" Feb 02 07:31:05 crc kubenswrapper[4730]: I0202 07:31:05.322609 4730 scope.go:117] "RemoveContainer" containerID="5ea06a93fa11ec8edf64a64382386adda1849f924f811f90b9ac10a5b8e89c0a" Feb 02 07:31:07 crc 
kubenswrapper[4730]: I0202 07:31:07.254478 4730 status_manager.go:851] "Failed to get status for pod" podUID="fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:07 crc kubenswrapper[4730]: E0202 07:31:07.412168 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:07 crc kubenswrapper[4730]: E0202 07:31:07.412406 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:07 crc kubenswrapper[4730]: E0202 07:31:07.412637 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:07 crc kubenswrapper[4730]: E0202 07:31:07.412871 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:07 crc kubenswrapper[4730]: E0202 07:31:07.413190 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:07 crc kubenswrapper[4730]: I0202 07:31:07.413223 4730 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" 
err="failed 5 attempts to update lease" Feb 02 07:31:07 crc kubenswrapper[4730]: E0202 07:31:07.413455 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="200ms" Feb 02 07:31:07 crc kubenswrapper[4730]: E0202 07:31:07.614833 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="400ms" Feb 02 07:31:08 crc kubenswrapper[4730]: E0202 07:31:08.015900 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="800ms" Feb 02 07:31:08 crc kubenswrapper[4730]: E0202 07:31:08.817176 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="1.6s" Feb 02 07:31:10 crc kubenswrapper[4730]: E0202 07:31:10.418113 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="3.2s" Feb 02 07:31:13 crc kubenswrapper[4730]: I0202 07:31:13.252984 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:13 crc kubenswrapper[4730]: I0202 07:31:13.254910 4730 status_manager.go:851] "Failed to get status for pod" podUID="fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:13 crc kubenswrapper[4730]: I0202 07:31:13.277801 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6bc53a5-2daa-4979-9368-c76de4c468e3" Feb 02 07:31:13 crc kubenswrapper[4730]: I0202 07:31:13.277852 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6bc53a5-2daa-4979-9368-c76de4c468e3" Feb 02 07:31:13 crc kubenswrapper[4730]: E0202 07:31:13.278570 4730 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:13 crc kubenswrapper[4730]: I0202 07:31:13.279115 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:13 crc kubenswrapper[4730]: E0202 07:31:13.620039 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="6.4s" Feb 02 07:31:13 crc kubenswrapper[4730]: E0202 07:31:13.938505 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18905d8104f82c0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 07:31:02.272142348 +0000 UTC m=+235.693345736,LastTimestamp:2026-02-02 07:31:02.272142348 +0000 UTC m=+235.693345736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 07:31:14 crc kubenswrapper[4730]: I0202 07:31:14.280405 4730 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="673e8748b8fa956b535f1af12066cc845980e5ec803182ea6220cfaff2d82698" exitCode=0 Feb 02 07:31:14 crc kubenswrapper[4730]: I0202 07:31:14.280456 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"673e8748b8fa956b535f1af12066cc845980e5ec803182ea6220cfaff2d82698"} Feb 02 07:31:14 crc kubenswrapper[4730]: I0202 07:31:14.280485 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6b76c34024bfe695c2336b69105d289fdf3f68d89440cfedacf3ec51bd99ab92"} Feb 02 07:31:14 crc kubenswrapper[4730]: I0202 07:31:14.280770 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6bc53a5-2daa-4979-9368-c76de4c468e3" Feb 02 07:31:14 crc kubenswrapper[4730]: I0202 07:31:14.280786 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6bc53a5-2daa-4979-9368-c76de4c468e3" Feb 02 07:31:14 crc kubenswrapper[4730]: I0202 07:31:14.282598 4730 status_manager.go:851] "Failed to get status for pod" podUID="fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Feb 02 07:31:14 crc kubenswrapper[4730]: E0202 07:31:14.282619 4730 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:15 crc kubenswrapper[4730]: I0202 07:31:15.294429 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a5931973e4f97b24b70d4c2f03aa3a9129b8626585b0b6828bc037c7820acee7"} Feb 02 07:31:15 crc 
kubenswrapper[4730]: I0202 07:31:15.294892 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"00f00ceca58f9eac48f3162cd1736dfa235e4bfe1792754a18949722b17e1c09"} Feb 02 07:31:15 crc kubenswrapper[4730]: I0202 07:31:15.294903 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"54ede9bc2f57c9808c08c45a9d4eeb2863a5408fc8c66262ef8cac1652adf212"} Feb 02 07:31:16 crc kubenswrapper[4730]: I0202 07:31:16.301641 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2eeaa180a3f9dc506f773792e08c3928fd896ca55142940aaad6f8dca05ce0d2"} Feb 02 07:31:16 crc kubenswrapper[4730]: I0202 07:31:16.301914 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7943b2e9f89e997cec8f8c9dc36c67cd2153bd5f9da87909b0de97133ab4e7be"} Feb 02 07:31:16 crc kubenswrapper[4730]: I0202 07:31:16.301934 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:16 crc kubenswrapper[4730]: I0202 07:31:16.301866 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6bc53a5-2daa-4979-9368-c76de4c468e3" Feb 02 07:31:16 crc kubenswrapper[4730]: I0202 07:31:16.301957 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6bc53a5-2daa-4979-9368-c76de4c468e3" Feb 02 07:31:17 crc kubenswrapper[4730]: I0202 07:31:17.309078 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 07:31:17 crc kubenswrapper[4730]: I0202 07:31:17.309145 4730 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26" exitCode=1 Feb 02 07:31:17 crc kubenswrapper[4730]: I0202 07:31:17.309204 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26"} Feb 02 07:31:17 crc kubenswrapper[4730]: I0202 07:31:17.309758 4730 scope.go:117] "RemoveContainer" containerID="8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26" Feb 02 07:31:18 crc kubenswrapper[4730]: I0202 07:31:18.279564 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:18 crc kubenswrapper[4730]: I0202 07:31:18.279861 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:18 crc kubenswrapper[4730]: I0202 07:31:18.285358 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:18 crc kubenswrapper[4730]: I0202 07:31:18.317891 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 07:31:18 crc kubenswrapper[4730]: I0202 07:31:18.317949 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1578f37584f251605889c87e5dee519a9c9dda0b274e4e5261a248e74b56dcd1"} Feb 02 07:31:21 crc kubenswrapper[4730]: I0202 07:31:21.314800 4730 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:21 crc kubenswrapper[4730]: I0202 07:31:21.336491 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6bc53a5-2daa-4979-9368-c76de4c468e3" Feb 02 07:31:21 crc kubenswrapper[4730]: I0202 07:31:21.336518 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6bc53a5-2daa-4979-9368-c76de4c468e3" Feb 02 07:31:21 crc kubenswrapper[4730]: I0202 07:31:21.338835 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 07:31:21 crc kubenswrapper[4730]: I0202 07:31:21.339415 4730 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e9b28912-27d1-4d52-a900-552e270cdc6d" Feb 02 07:31:22 crc kubenswrapper[4730]: I0202 07:31:22.342798 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6bc53a5-2daa-4979-9368-c76de4c468e3" Feb 02 07:31:22 crc kubenswrapper[4730]: I0202 07:31:22.342850 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6bc53a5-2daa-4979-9368-c76de4c468e3" Feb 02 07:31:22 crc kubenswrapper[4730]: I0202 07:31:22.350062 4730 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e9b28912-27d1-4d52-a900-552e270cdc6d" Feb 02 07:31:22 crc kubenswrapper[4730]: I0202 07:31:22.368297 4730 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 07:31:26 crc kubenswrapper[4730]: I0202 07:31:26.776824 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 07:31:26 crc kubenswrapper[4730]: I0202 07:31:26.777301 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 07:31:26 crc kubenswrapper[4730]: I0202 07:31:26.777372 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 07:31:31 crc kubenswrapper[4730]: I0202 07:31:31.221574 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 02 07:31:31 crc kubenswrapper[4730]: I0202 07:31:31.225466 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 02 07:31:32 crc kubenswrapper[4730]: I0202 07:31:32.057570 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 02 07:31:32 crc kubenswrapper[4730]: I0202 07:31:32.091488 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 02 07:31:33 crc kubenswrapper[4730]: I0202 07:31:33.391970 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 02 07:31:33 crc kubenswrapper[4730]: I0202 07:31:33.586736 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 02 07:31:33 crc kubenswrapper[4730]: I0202 07:31:33.589924 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 02 07:31:33 crc kubenswrapper[4730]: I0202 07:31:33.802236 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 02 07:31:33 crc kubenswrapper[4730]: I0202 07:31:33.869633 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 02 07:31:33 crc kubenswrapper[4730]: I0202 07:31:33.873334 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.007496 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.108747 4730 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.186474 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.280962 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.393346 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.401064 4730 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.409119 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.409225 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.417140 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.438371 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.43834006 podStartE2EDuration="13.43834006s" podCreationTimestamp="2026-02-02 07:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:31:34.432389841 +0000 UTC m=+267.853593189" watchObservedRunningTime="2026-02-02 07:31:34.43834006 +0000 UTC m=+267.859543418"
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.442879 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.519406 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.666362 4730 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.683465 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 02 07:31:34 crc kubenswrapper[4730]: I0202 07:31:34.820314 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 02 07:31:35 crc kubenswrapper[4730]: I0202 07:31:35.235858 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 02 07:31:35 crc kubenswrapper[4730]: I0202 07:31:35.268079 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 02 07:31:35 crc kubenswrapper[4730]: I0202 07:31:35.296024 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 02 07:31:35 crc kubenswrapper[4730]: I0202 07:31:35.359875 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 02 07:31:35 crc kubenswrapper[4730]: I0202 07:31:35.429801 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 02 07:31:35 crc kubenswrapper[4730]: I0202 07:31:35.441989 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 02 07:31:35 crc kubenswrapper[4730]: I0202 07:31:35.485655 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 02 07:31:35 crc kubenswrapper[4730]: I0202 07:31:35.554679 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 02 07:31:35 crc kubenswrapper[4730]: I0202 07:31:35.926342 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 02 07:31:35 crc kubenswrapper[4730]: I0202 07:31:35.960027 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.104029 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.118849 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.194765 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.245248 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.312546 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.346894 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.367549 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.380503 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.401401 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.409316 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.458873 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.537007 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.564475 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.686553 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.743451 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.777004 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.777084 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.791886 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.835973 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.869689 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.933794 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 02 07:31:36 crc kubenswrapper[4730]: I0202 07:31:36.998843 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 02 07:31:37 crc kubenswrapper[4730]: I0202 07:31:37.040348 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 02 07:31:37 crc kubenswrapper[4730]: I0202 07:31:37.043284 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 02 07:31:37 crc kubenswrapper[4730]: I0202 07:31:37.149305 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 02 07:31:37 crc kubenswrapper[4730]: I0202 07:31:37.253980 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 02 07:31:37 crc kubenswrapper[4730]: I0202 07:31:37.418515 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 02 07:31:37 crc kubenswrapper[4730]: I0202 07:31:37.461663 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 02 07:31:37 crc kubenswrapper[4730]: I0202 07:31:37.531310 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 02 07:31:37 crc kubenswrapper[4730]: I0202 07:31:37.798987 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 02 07:31:37 crc kubenswrapper[4730]: I0202 07:31:37.897271 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 02 07:31:37 crc kubenswrapper[4730]: I0202 07:31:37.897884 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 02 07:31:37 crc kubenswrapper[4730]: I0202 07:31:37.941670 4730 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 02 07:31:37 crc kubenswrapper[4730]: I0202 07:31:37.974570 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 02 07:31:37 crc kubenswrapper[4730]: I0202 07:31:37.976799 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.003778 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.009842 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.067961 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.113488 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.282610 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.315936 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.427917 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.470373 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.512127 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.513628 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.543683 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.549955 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.638437 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.858841 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.863086 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.884652 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 02 07:31:38 crc kubenswrapper[4730]: I0202 07:31:38.978518 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.061852 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.084401 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.115993 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.123843 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.225895 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.231924 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.385465 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.389822 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.445818 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.453703 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.532395 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.564871 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.566319 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.737385 4730 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.741933 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.781013 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.859675 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.901368 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 02 07:31:39 crc kubenswrapper[4730]: I0202 07:31:39.946805 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.032544 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.241441 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.329665 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.335263 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.337041 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.410943 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.414963 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.436993 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.443983 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.490427 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.508024 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.519992 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.550336 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.553474 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.562436 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.617511 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.678378 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.691778 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.843001 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.879250 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.885091 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 02 07:31:40 crc kubenswrapper[4730]: I0202 07:31:40.926907 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.045790 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.058891 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.070525 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.120505 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.135998 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.170428 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.260443 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.272201 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.370586 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.379895 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.387156 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.387699 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.453972 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.540938 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.553914 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.588900 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.632554 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.770958 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.840345 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.943742 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 02 07:31:41 crc kubenswrapper[4730]: I0202 07:31:41.982229 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.020546 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.042492 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.042548 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.047487 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.057634 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.136985 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.161680 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.199500 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.219314 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.316393 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.440017 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.456934 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.470790 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.594225 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.843364 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.853894 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.898991 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.930534 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.978158 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.983797 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 02 07:31:42 crc kubenswrapper[4730]: I0202 07:31:42.999720 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.054854 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.100036 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.114461 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.177802 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.240498 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.265861 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.315414 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.348242 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.396049 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.497904 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.708477 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.737319 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.741508 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.744032 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.752448 4730 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.752846 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://f87c3f51c55e772afb7622a21d9968f737ecfd95059da5e3fc507f6dda00324c" gracePeriod=5
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.759925 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.786762 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.808349 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.825642 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 02 07:31:43 crc kubenswrapper[4730]: I0202 07:31:43.861430 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.001057 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.005776 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.097582 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.135703 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.151044 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.155731 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.164813 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.252290 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.338913 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.604331 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.613681 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.667061 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.845465 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.859307 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.864910 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 02 07:31:44 crc kubenswrapper[4730]: I0202 07:31:44.930834 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.022923 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.090942 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.102141 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sk8kx"]
Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.102378 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sk8kx" podUID="5534cdef-89e0-4d1c-b5b6-24a739696063" containerName="registry-server" containerID="cri-o://162cd3ad0f4adde36a4e08e2923138984a431ee64706d884de79f5a40dfa658e" gracePeriod=30
Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.105783 4730 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.114213 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tpd9l"] Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.114660 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tpd9l" podUID="10e61c85-454b-47fa-8827-5a1de18dcfdf" containerName="registry-server" containerID="cri-o://15ee120a70ddfa2e03c90f7e104a948a13e4338fea2d8b1f05bc2aaad1a66305" gracePeriod=30 Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.121438 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hs55q"] Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.121619 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" podUID="de37b790-96db-42d1-8a4c-826e0a88bd97" containerName="marketplace-operator" containerID="cri-o://f88d7df3224c90072b79a0e921bf5dc6668373fa2e328bcd4f92d4ac1152d8fa" gracePeriod=30 Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.128748 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.135544 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-prb8b"] Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.135843 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-prb8b" podUID="cac6c492-5297-4467-b15b-d211bd932d9e" containerName="registry-server" containerID="cri-o://efd0529432af4e145180a1be0e9365b0e9bc713b77b4887f6046e61868a8ba51" gracePeriod=30 Feb 02 07:31:45 
crc kubenswrapper[4730]: I0202 07:31:45.142355 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-svn5k"] Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.142640 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-svn5k" podUID="17c5db75-0318-476c-aab3-8ddab8adb360" containerName="registry-server" containerID="cri-o://892e8254732a2818d8a11fa3ef2cc112ed9bcd30321b31cefd32223b0ebb47d6" gracePeriod=30 Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.194391 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.259902 4730 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.277766 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.292100 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.315861 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.363587 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.403358 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.467027 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 07:31:45 crc 
kubenswrapper[4730]: I0202 07:31:45.479640 4730 generic.go:334] "Generic (PLEG): container finished" podID="17c5db75-0318-476c-aab3-8ddab8adb360" containerID="892e8254732a2818d8a11fa3ef2cc112ed9bcd30321b31cefd32223b0ebb47d6" exitCode=0 Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.479710 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svn5k" event={"ID":"17c5db75-0318-476c-aab3-8ddab8adb360","Type":"ContainerDied","Data":"892e8254732a2818d8a11fa3ef2cc112ed9bcd30321b31cefd32223b0ebb47d6"} Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.482641 4730 generic.go:334] "Generic (PLEG): container finished" podID="5534cdef-89e0-4d1c-b5b6-24a739696063" containerID="162cd3ad0f4adde36a4e08e2923138984a431ee64706d884de79f5a40dfa658e" exitCode=0 Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.482687 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk8kx" event={"ID":"5534cdef-89e0-4d1c-b5b6-24a739696063","Type":"ContainerDied","Data":"162cd3ad0f4adde36a4e08e2923138984a431ee64706d884de79f5a40dfa658e"} Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.496626 4730 generic.go:334] "Generic (PLEG): container finished" podID="10e61c85-454b-47fa-8827-5a1de18dcfdf" containerID="15ee120a70ddfa2e03c90f7e104a948a13e4338fea2d8b1f05bc2aaad1a66305" exitCode=0 Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.496674 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpd9l" event={"ID":"10e61c85-454b-47fa-8827-5a1de18dcfdf","Type":"ContainerDied","Data":"15ee120a70ddfa2e03c90f7e104a948a13e4338fea2d8b1f05bc2aaad1a66305"} Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.496741 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpd9l" 
event={"ID":"10e61c85-454b-47fa-8827-5a1de18dcfdf","Type":"ContainerDied","Data":"7a41ca262f5659554a1541527f74a5170ef089890f42b489ac0c93d2819eb0e4"} Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.496759 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a41ca262f5659554a1541527f74a5170ef089890f42b489ac0c93d2819eb0e4" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.499144 4730 generic.go:334] "Generic (PLEG): container finished" podID="cac6c492-5297-4467-b15b-d211bd932d9e" containerID="efd0529432af4e145180a1be0e9365b0e9bc713b77b4887f6046e61868a8ba51" exitCode=0 Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.499187 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prb8b" event={"ID":"cac6c492-5297-4467-b15b-d211bd932d9e","Type":"ContainerDied","Data":"efd0529432af4e145180a1be0e9365b0e9bc713b77b4887f6046e61868a8ba51"} Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.500408 4730 generic.go:334] "Generic (PLEG): container finished" podID="de37b790-96db-42d1-8a4c-826e0a88bd97" containerID="f88d7df3224c90072b79a0e921bf5dc6668373fa2e328bcd4f92d4ac1152d8fa" exitCode=0 Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.500446 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" event={"ID":"de37b790-96db-42d1-8a4c-826e0a88bd97","Type":"ContainerDied","Data":"f88d7df3224c90072b79a0e921bf5dc6668373fa2e328bcd4f92d4ac1152d8fa"} Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.518306 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.563491 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.563870 4730 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.580823 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.595652 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.683771 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.686727 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.712085 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-svn5k" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.715902 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prb8b" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.735029 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.744142 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvqst\" (UniqueName: \"kubernetes.io/projected/10e61c85-454b-47fa-8827-5a1de18dcfdf-kube-api-access-qvqst\") pod \"10e61c85-454b-47fa-8827-5a1de18dcfdf\" (UID: \"10e61c85-454b-47fa-8827-5a1de18dcfdf\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.744248 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10e61c85-454b-47fa-8827-5a1de18dcfdf-catalog-content\") pod \"10e61c85-454b-47fa-8827-5a1de18dcfdf\" (UID: \"10e61c85-454b-47fa-8827-5a1de18dcfdf\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.744300 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10e61c85-454b-47fa-8827-5a1de18dcfdf-utilities\") pod \"10e61c85-454b-47fa-8827-5a1de18dcfdf\" (UID: \"10e61c85-454b-47fa-8827-5a1de18dcfdf\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.745140 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10e61c85-454b-47fa-8827-5a1de18dcfdf-utilities" (OuterVolumeSpecName: "utilities") pod "10e61c85-454b-47fa-8827-5a1de18dcfdf" (UID: "10e61c85-454b-47fa-8827-5a1de18dcfdf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.749832 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10e61c85-454b-47fa-8827-5a1de18dcfdf-kube-api-access-qvqst" (OuterVolumeSpecName: "kube-api-access-qvqst") pod "10e61c85-454b-47fa-8827-5a1de18dcfdf" (UID: "10e61c85-454b-47fa-8827-5a1de18dcfdf"). InnerVolumeSpecName "kube-api-access-qvqst". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.762301 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.799192 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10e61c85-454b-47fa-8827-5a1de18dcfdf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10e61c85-454b-47fa-8827-5a1de18dcfdf" (UID: "10e61c85-454b-47fa-8827-5a1de18dcfdf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.845526 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5534cdef-89e0-4d1c-b5b6-24a739696063-catalog-content\") pod \"5534cdef-89e0-4d1c-b5b6-24a739696063\" (UID: \"5534cdef-89e0-4d1c-b5b6-24a739696063\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.845567 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c5db75-0318-476c-aab3-8ddab8adb360-catalog-content\") pod \"17c5db75-0318-476c-aab3-8ddab8adb360\" (UID: \"17c5db75-0318-476c-aab3-8ddab8adb360\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.845599 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c5db75-0318-476c-aab3-8ddab8adb360-utilities\") pod \"17c5db75-0318-476c-aab3-8ddab8adb360\" (UID: \"17c5db75-0318-476c-aab3-8ddab8adb360\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.845636 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbn62\" (UniqueName: \"kubernetes.io/projected/5534cdef-89e0-4d1c-b5b6-24a739696063-kube-api-access-pbn62\") pod \"5534cdef-89e0-4d1c-b5b6-24a739696063\" (UID: \"5534cdef-89e0-4d1c-b5b6-24a739696063\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.845660 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkkqj\" (UniqueName: \"kubernetes.io/projected/de37b790-96db-42d1-8a4c-826e0a88bd97-kube-api-access-fkkqj\") pod \"de37b790-96db-42d1-8a4c-826e0a88bd97\" (UID: \"de37b790-96db-42d1-8a4c-826e0a88bd97\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.845687 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cac6c492-5297-4467-b15b-d211bd932d9e-utilities\") pod \"cac6c492-5297-4467-b15b-d211bd932d9e\" (UID: \"cac6c492-5297-4467-b15b-d211bd932d9e\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.845711 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de37b790-96db-42d1-8a4c-826e0a88bd97-marketplace-operator-metrics\") pod \"de37b790-96db-42d1-8a4c-826e0a88bd97\" (UID: \"de37b790-96db-42d1-8a4c-826e0a88bd97\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.845729 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de37b790-96db-42d1-8a4c-826e0a88bd97-marketplace-trusted-ca\") pod \"de37b790-96db-42d1-8a4c-826e0a88bd97\" (UID: \"de37b790-96db-42d1-8a4c-826e0a88bd97\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.845766 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvfqd\" (UniqueName: \"kubernetes.io/projected/17c5db75-0318-476c-aab3-8ddab8adb360-kube-api-access-kvfqd\") pod \"17c5db75-0318-476c-aab3-8ddab8adb360\" (UID: \"17c5db75-0318-476c-aab3-8ddab8adb360\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.845789 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5534cdef-89e0-4d1c-b5b6-24a739696063-utilities\") pod \"5534cdef-89e0-4d1c-b5b6-24a739696063\" (UID: \"5534cdef-89e0-4d1c-b5b6-24a739696063\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.845814 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sng6r\" (UniqueName: \"kubernetes.io/projected/cac6c492-5297-4467-b15b-d211bd932d9e-kube-api-access-sng6r\") pod \"cac6c492-5297-4467-b15b-d211bd932d9e\" (UID: 
\"cac6c492-5297-4467-b15b-d211bd932d9e\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.845832 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cac6c492-5297-4467-b15b-d211bd932d9e-catalog-content\") pod \"cac6c492-5297-4467-b15b-d211bd932d9e\" (UID: \"cac6c492-5297-4467-b15b-d211bd932d9e\") " Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.846004 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvqst\" (UniqueName: \"kubernetes.io/projected/10e61c85-454b-47fa-8827-5a1de18dcfdf-kube-api-access-qvqst\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.846015 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10e61c85-454b-47fa-8827-5a1de18dcfdf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.846023 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10e61c85-454b-47fa-8827-5a1de18dcfdf-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.846434 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17c5db75-0318-476c-aab3-8ddab8adb360-utilities" (OuterVolumeSpecName: "utilities") pod "17c5db75-0318-476c-aab3-8ddab8adb360" (UID: "17c5db75-0318-476c-aab3-8ddab8adb360"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.846934 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de37b790-96db-42d1-8a4c-826e0a88bd97-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "de37b790-96db-42d1-8a4c-826e0a88bd97" (UID: "de37b790-96db-42d1-8a4c-826e0a88bd97"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.847155 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cac6c492-5297-4467-b15b-d211bd932d9e-utilities" (OuterVolumeSpecName: "utilities") pod "cac6c492-5297-4467-b15b-d211bd932d9e" (UID: "cac6c492-5297-4467-b15b-d211bd932d9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.847724 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5534cdef-89e0-4d1c-b5b6-24a739696063-utilities" (OuterVolumeSpecName: "utilities") pod "5534cdef-89e0-4d1c-b5b6-24a739696063" (UID: "5534cdef-89e0-4d1c-b5b6-24a739696063"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.849983 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac6c492-5297-4467-b15b-d211bd932d9e-kube-api-access-sng6r" (OuterVolumeSpecName: "kube-api-access-sng6r") pod "cac6c492-5297-4467-b15b-d211bd932d9e" (UID: "cac6c492-5297-4467-b15b-d211bd932d9e"). InnerVolumeSpecName "kube-api-access-sng6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.850816 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de37b790-96db-42d1-8a4c-826e0a88bd97-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "de37b790-96db-42d1-8a4c-826e0a88bd97" (UID: "de37b790-96db-42d1-8a4c-826e0a88bd97"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.851001 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.851175 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c5db75-0318-476c-aab3-8ddab8adb360-kube-api-access-kvfqd" (OuterVolumeSpecName: "kube-api-access-kvfqd") pod "17c5db75-0318-476c-aab3-8ddab8adb360" (UID: "17c5db75-0318-476c-aab3-8ddab8adb360"). InnerVolumeSpecName "kube-api-access-kvfqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.852128 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5534cdef-89e0-4d1c-b5b6-24a739696063-kube-api-access-pbn62" (OuterVolumeSpecName: "kube-api-access-pbn62") pod "5534cdef-89e0-4d1c-b5b6-24a739696063" (UID: "5534cdef-89e0-4d1c-b5b6-24a739696063"). InnerVolumeSpecName "kube-api-access-pbn62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.855662 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de37b790-96db-42d1-8a4c-826e0a88bd97-kube-api-access-fkkqj" (OuterVolumeSpecName: "kube-api-access-fkkqj") pod "de37b790-96db-42d1-8a4c-826e0a88bd97" (UID: "de37b790-96db-42d1-8a4c-826e0a88bd97"). InnerVolumeSpecName "kube-api-access-fkkqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.868252 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cac6c492-5297-4467-b15b-d211bd932d9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cac6c492-5297-4467-b15b-d211bd932d9e" (UID: "cac6c492-5297-4467-b15b-d211bd932d9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.910606 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5534cdef-89e0-4d1c-b5b6-24a739696063-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5534cdef-89e0-4d1c-b5b6-24a739696063" (UID: "5534cdef-89e0-4d1c-b5b6-24a739696063"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.947121 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbn62\" (UniqueName: \"kubernetes.io/projected/5534cdef-89e0-4d1c-b5b6-24a739696063-kube-api-access-pbn62\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.947248 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkkqj\" (UniqueName: \"kubernetes.io/projected/de37b790-96db-42d1-8a4c-826e0a88bd97-kube-api-access-fkkqj\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.947282 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cac6c492-5297-4467-b15b-d211bd932d9e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.947302 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de37b790-96db-42d1-8a4c-826e0a88bd97-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.947320 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de37b790-96db-42d1-8a4c-826e0a88bd97-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.947338 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvfqd\" (UniqueName: \"kubernetes.io/projected/17c5db75-0318-476c-aab3-8ddab8adb360-kube-api-access-kvfqd\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.947355 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5534cdef-89e0-4d1c-b5b6-24a739696063-utilities\") on node \"crc\" DevicePath \"\"" Feb 
02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.947372 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sng6r\" (UniqueName: \"kubernetes.io/projected/cac6c492-5297-4467-b15b-d211bd932d9e-kube-api-access-sng6r\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.947389 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cac6c492-5297-4467-b15b-d211bd932d9e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.947405 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5534cdef-89e0-4d1c-b5b6-24a739696063-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.947420 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c5db75-0318-476c-aab3-8ddab8adb360-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:45 crc kubenswrapper[4730]: I0202 07:31:45.964588 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17c5db75-0318-476c-aab3-8ddab8adb360-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17c5db75-0318-476c-aab3-8ddab8adb360" (UID: "17c5db75-0318-476c-aab3-8ddab8adb360"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.049099 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c5db75-0318-476c-aab3-8ddab8adb360-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.084361 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.272260 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.508621 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk8kx" event={"ID":"5534cdef-89e0-4d1c-b5b6-24a739696063","Type":"ContainerDied","Data":"cc8786193182394ec3456353e4461d10f85c687a180d8ac5b5621d5291746ede"} Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.508651 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sk8kx" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.508696 4730 scope.go:117] "RemoveContainer" containerID="162cd3ad0f4adde36a4e08e2923138984a431ee64706d884de79f5a40dfa658e" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.510872 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prb8b" event={"ID":"cac6c492-5297-4467-b15b-d211bd932d9e","Type":"ContainerDied","Data":"7baba3f602960c15d3d1b1b3a4c6479032f1f090c2bd7f769d71fe8bd67b01e1"} Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.510905 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prb8b" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.512792 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" event={"ID":"de37b790-96db-42d1-8a4c-826e0a88bd97","Type":"ContainerDied","Data":"d1167cf7c5c001ec91ccd3979de29ca0943092922fdd4c53e445f1f04658f54f"} Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.512899 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hs55q" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.517422 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tpd9l" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.530373 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-svn5k" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.534845 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svn5k" event={"ID":"17c5db75-0318-476c-aab3-8ddab8adb360","Type":"ContainerDied","Data":"0cf79a1f105537f6d0d490dbc6b2c739c84d2dd20ce724d1b044fa73278e8108"} Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.542678 4730 scope.go:117] "RemoveContainer" containerID="d7234c42790032f1de843e54745190e9e4c277ee023d76d4bf3f3bd206f96a5d" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.599391 4730 scope.go:117] "RemoveContainer" containerID="b489e4d3882d746b4324fbdf0cc006b88f97ccd18650e2f13505f50e4434695b" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.601538 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sk8kx"] Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.608049 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-sk8kx"] Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.619260 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-svn5k"] Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.621289 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-svn5k"] Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.629980 4730 scope.go:117] "RemoveContainer" containerID="efd0529432af4e145180a1be0e9365b0e9bc713b77b4887f6046e61868a8ba51" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.635084 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hs55q"] Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.639098 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hs55q"] Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.646280 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-prb8b"] Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.668737 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-prb8b"] Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.669029 4730 scope.go:117] "RemoveContainer" containerID="6b885215366b21e6eea33e5302a1e2c281b8145c8b57c8da59008d29bfa6321b" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.677596 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tpd9l"] Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.682631 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tpd9l"] Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.686096 4730 scope.go:117] "RemoveContainer" containerID="27d4de6af5a44a04f26174361457db46414a6e2ff3ff114b386987d51da8449a" Feb 02 
07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.712980 4730 scope.go:117] "RemoveContainer" containerID="f88d7df3224c90072b79a0e921bf5dc6668373fa2e328bcd4f92d4ac1152d8fa" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.733877 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.736569 4730 scope.go:117] "RemoveContainer" containerID="892e8254732a2818d8a11fa3ef2cc112ed9bcd30321b31cefd32223b0ebb47d6" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.749211 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.750955 4730 scope.go:117] "RemoveContainer" containerID="d3a3b6025c7ae749f4306e3c70d40c7907bf6ae524d483bfae99c483c694dbcd" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.764618 4730 scope.go:117] "RemoveContainer" containerID="273b11316d7461962f673134b16652126db15cd4717b3a966c5418c7e86bb713" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.777970 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.778057 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.778140 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.778937 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"1578f37584f251605889c87e5dee519a9c9dda0b274e4e5261a248e74b56dcd1"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.779094 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://1578f37584f251605889c87e5dee519a9c9dda0b274e4e5261a248e74b56dcd1" gracePeriod=30 Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.781291 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.851879 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.874196 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.884036 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.885811 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 07:31:46 crc kubenswrapper[4730]: I0202 07:31:46.994744 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"serving-cert" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.004415 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.051569 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.078391 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.099978 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.263402 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10e61c85-454b-47fa-8827-5a1de18dcfdf" path="/var/lib/kubelet/pods/10e61c85-454b-47fa-8827-5a1de18dcfdf/volumes" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.264717 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c5db75-0318-476c-aab3-8ddab8adb360" path="/var/lib/kubelet/pods/17c5db75-0318-476c-aab3-8ddab8adb360/volumes" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.266097 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5534cdef-89e0-4d1c-b5b6-24a739696063" path="/var/lib/kubelet/pods/5534cdef-89e0-4d1c-b5b6-24a739696063/volumes" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.268062 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac6c492-5297-4467-b15b-d211bd932d9e" path="/var/lib/kubelet/pods/cac6c492-5297-4467-b15b-d211bd932d9e/volumes" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.269286 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de37b790-96db-42d1-8a4c-826e0a88bd97" 
path="/var/lib/kubelet/pods/de37b790-96db-42d1-8a4c-826e0a88bd97/volumes" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.324027 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.377889 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.436356 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.544002 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.693020 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.693662 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.834798 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 07:31:47 crc kubenswrapper[4730]: I0202 07:31:47.871555 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.074434 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.269431 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.384764 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.396389 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.438698 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.516931 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.549636 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.887369 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.887433 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.996739 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.996802 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.996880 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.997087 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.997108 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.997142 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.997496 4730 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.997563 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.997615 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:31:48 crc kubenswrapper[4730]: I0202 07:31:48.997651 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:31:49 crc kubenswrapper[4730]: I0202 07:31:49.003141 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:31:49 crc kubenswrapper[4730]: I0202 07:31:49.098480 4730 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:49 crc kubenswrapper[4730]: I0202 07:31:49.098531 4730 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:49 crc kubenswrapper[4730]: I0202 07:31:49.098550 4730 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:49 crc kubenswrapper[4730]: I0202 07:31:49.098567 4730 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 07:31:49 crc kubenswrapper[4730]: I0202 07:31:49.133109 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 07:31:49 crc kubenswrapper[4730]: I0202 07:31:49.276958 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 02 07:31:49 crc 
kubenswrapper[4730]: I0202 07:31:49.378280 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 07:31:49 crc kubenswrapper[4730]: I0202 07:31:49.503154 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 07:31:49 crc kubenswrapper[4730]: I0202 07:31:49.539731 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 07:31:49 crc kubenswrapper[4730]: I0202 07:31:49.539785 4730 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="f87c3f51c55e772afb7622a21d9968f737ecfd95059da5e3fc507f6dda00324c" exitCode=137 Feb 02 07:31:49 crc kubenswrapper[4730]: I0202 07:31:49.539826 4730 scope.go:117] "RemoveContainer" containerID="f87c3f51c55e772afb7622a21d9968f737ecfd95059da5e3fc507f6dda00324c" Feb 02 07:31:49 crc kubenswrapper[4730]: I0202 07:31:49.539847 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 07:31:49 crc kubenswrapper[4730]: I0202 07:31:49.562323 4730 scope.go:117] "RemoveContainer" containerID="f87c3f51c55e772afb7622a21d9968f737ecfd95059da5e3fc507f6dda00324c" Feb 02 07:31:49 crc kubenswrapper[4730]: E0202 07:31:49.563175 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f87c3f51c55e772afb7622a21d9968f737ecfd95059da5e3fc507f6dda00324c\": container with ID starting with f87c3f51c55e772afb7622a21d9968f737ecfd95059da5e3fc507f6dda00324c not found: ID does not exist" containerID="f87c3f51c55e772afb7622a21d9968f737ecfd95059da5e3fc507f6dda00324c" Feb 02 07:31:49 crc kubenswrapper[4730]: I0202 07:31:49.563215 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87c3f51c55e772afb7622a21d9968f737ecfd95059da5e3fc507f6dda00324c"} err="failed to get container status \"f87c3f51c55e772afb7622a21d9968f737ecfd95059da5e3fc507f6dda00324c\": rpc error: code = NotFound desc = could not find container \"f87c3f51c55e772afb7622a21d9968f737ecfd95059da5e3fc507f6dda00324c\": container with ID starting with f87c3f51c55e772afb7622a21d9968f737ecfd95059da5e3fc507f6dda00324c not found: ID does not exist" Feb 02 07:31:51 crc kubenswrapper[4730]: I0202 07:31:51.169029 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 07:32:07 crc kubenswrapper[4730]: I0202 07:32:07.026053 4730 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.625180 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gsgln"] Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.625881 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" containerName="installer" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.625895 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" containerName="installer" Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.625911 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac6c492-5297-4467-b15b-d211bd932d9e" containerName="registry-server" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.625918 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac6c492-5297-4467-b15b-d211bd932d9e" containerName="registry-server" Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.625930 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e61c85-454b-47fa-8827-5a1de18dcfdf" containerName="registry-server" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.625937 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e61c85-454b-47fa-8827-5a1de18dcfdf" containerName="registry-server" Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.625947 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.625954 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.625963 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac6c492-5297-4467-b15b-d211bd932d9e" containerName="extract-utilities" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.625970 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac6c492-5297-4467-b15b-d211bd932d9e" containerName="extract-utilities" Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.625978 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5534cdef-89e0-4d1c-b5b6-24a739696063" containerName="registry-server" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.625984 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="5534cdef-89e0-4d1c-b5b6-24a739696063" containerName="registry-server" Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.625993 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c5db75-0318-476c-aab3-8ddab8adb360" containerName="extract-utilities" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.625999 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c5db75-0318-476c-aab3-8ddab8adb360" containerName="extract-utilities" Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.626009 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac6c492-5297-4467-b15b-d211bd932d9e" containerName="extract-content" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626015 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac6c492-5297-4467-b15b-d211bd932d9e" containerName="extract-content" Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.626024 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5534cdef-89e0-4d1c-b5b6-24a739696063" containerName="extract-content" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626032 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="5534cdef-89e0-4d1c-b5b6-24a739696063" containerName="extract-content" Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.626041 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c5db75-0318-476c-aab3-8ddab8adb360" containerName="registry-server" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626047 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c5db75-0318-476c-aab3-8ddab8adb360" containerName="registry-server" Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.626055 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="17c5db75-0318-476c-aab3-8ddab8adb360" containerName="extract-content" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626062 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c5db75-0318-476c-aab3-8ddab8adb360" containerName="extract-content" Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.626070 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e61c85-454b-47fa-8827-5a1de18dcfdf" containerName="extract-content" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626077 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e61c85-454b-47fa-8827-5a1de18dcfdf" containerName="extract-content" Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.626086 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de37b790-96db-42d1-8a4c-826e0a88bd97" containerName="marketplace-operator" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626093 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="de37b790-96db-42d1-8a4c-826e0a88bd97" containerName="marketplace-operator" Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.626104 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e61c85-454b-47fa-8827-5a1de18dcfdf" containerName="extract-utilities" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626111 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e61c85-454b-47fa-8827-5a1de18dcfdf" containerName="extract-utilities" Feb 02 07:32:10 crc kubenswrapper[4730]: E0202 07:32:10.626120 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5534cdef-89e0-4d1c-b5b6-24a739696063" containerName="extract-utilities" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626127 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="5534cdef-89e0-4d1c-b5b6-24a739696063" containerName="extract-utilities" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626259 4730 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="17c5db75-0318-476c-aab3-8ddab8adb360" containerName="registry-server" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626271 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626284 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e61c85-454b-47fa-8827-5a1de18dcfdf" containerName="registry-server" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626294 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdbc1489-6cf6-42e9-8d12-3ce8418dc04b" containerName="installer" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626304 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="de37b790-96db-42d1-8a4c-826e0a88bd97" containerName="marketplace-operator" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626312 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac6c492-5297-4467-b15b-d211bd932d9e" containerName="registry-server" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626319 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="5534cdef-89e0-4d1c-b5b6-24a739696063" containerName="registry-server" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.626725 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.628892 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.629185 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.629840 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.630033 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.637831 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gsgln"] Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.639440 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.752150 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f381f8d-949e-41ee-a20e-31189f5630f1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gsgln\" (UID: \"0f381f8d-949e-41ee-a20e-31189f5630f1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.752232 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kpkt\" (UniqueName: \"kubernetes.io/projected/0f381f8d-949e-41ee-a20e-31189f5630f1-kube-api-access-9kpkt\") pod \"marketplace-operator-79b997595-gsgln\" (UID: 
\"0f381f8d-949e-41ee-a20e-31189f5630f1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.752316 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f381f8d-949e-41ee-a20e-31189f5630f1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gsgln\" (UID: \"0f381f8d-949e-41ee-a20e-31189f5630f1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.853312 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kpkt\" (UniqueName: \"kubernetes.io/projected/0f381f8d-949e-41ee-a20e-31189f5630f1-kube-api-access-9kpkt\") pod \"marketplace-operator-79b997595-gsgln\" (UID: \"0f381f8d-949e-41ee-a20e-31189f5630f1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.853446 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f381f8d-949e-41ee-a20e-31189f5630f1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gsgln\" (UID: \"0f381f8d-949e-41ee-a20e-31189f5630f1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.853486 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f381f8d-949e-41ee-a20e-31189f5630f1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gsgln\" (UID: \"0f381f8d-949e-41ee-a20e-31189f5630f1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.854698 4730 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f381f8d-949e-41ee-a20e-31189f5630f1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gsgln\" (UID: \"0f381f8d-949e-41ee-a20e-31189f5630f1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.870650 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f381f8d-949e-41ee-a20e-31189f5630f1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gsgln\" (UID: \"0f381f8d-949e-41ee-a20e-31189f5630f1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.877056 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kpkt\" (UniqueName: \"kubernetes.io/projected/0f381f8d-949e-41ee-a20e-31189f5630f1-kube-api-access-9kpkt\") pod \"marketplace-operator-79b997595-gsgln\" (UID: \"0f381f8d-949e-41ee-a20e-31189f5630f1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" Feb 02 07:32:10 crc kubenswrapper[4730]: I0202 07:32:10.942387 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.350379 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gsgln"] Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.652135 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" event={"ID":"0f381f8d-949e-41ee-a20e-31189f5630f1","Type":"ContainerStarted","Data":"4eb38c6747aca731d8b803c7c3a085d066c8bc08718983cfcd33a78e412ff4ef"} Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.652429 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.652439 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" event={"ID":"0f381f8d-949e-41ee-a20e-31189f5630f1","Type":"ContainerStarted","Data":"a11e41165c4ec2eed98d663890b0522cc237f2d1158bc8118fe5e91d04955912"} Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.653845 4730 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gsgln container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" start-of-body= Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.653940 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" podUID="0f381f8d-949e-41ee-a20e-31189f5630f1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.680585 4730 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" podStartSLOduration=1.680566229 podStartE2EDuration="1.680566229s" podCreationTimestamp="2026-02-02 07:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:32:11.680290712 +0000 UTC m=+305.101494070" watchObservedRunningTime="2026-02-02 07:32:11.680566229 +0000 UTC m=+305.101769597" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.730451 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dnq6k"] Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.731854 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.734427 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.747091 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnq6k"] Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.871656 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kphb9\" (UniqueName: \"kubernetes.io/projected/4f6f5114-b6e2-4843-8ff5-ac67569c1dbd-kube-api-access-kphb9\") pod \"redhat-marketplace-dnq6k\" (UID: \"4f6f5114-b6e2-4843-8ff5-ac67569c1dbd\") " pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.871743 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f6f5114-b6e2-4843-8ff5-ac67569c1dbd-catalog-content\") pod \"redhat-marketplace-dnq6k\" (UID: \"4f6f5114-b6e2-4843-8ff5-ac67569c1dbd\") " 
pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.871786 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f6f5114-b6e2-4843-8ff5-ac67569c1dbd-utilities\") pod \"redhat-marketplace-dnq6k\" (UID: \"4f6f5114-b6e2-4843-8ff5-ac67569c1dbd\") " pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.926482 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qkzng"] Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.927336 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.929186 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.943868 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qkzng"] Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.972673 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kphb9\" (UniqueName: \"kubernetes.io/projected/4f6f5114-b6e2-4843-8ff5-ac67569c1dbd-kube-api-access-kphb9\") pod \"redhat-marketplace-dnq6k\" (UID: \"4f6f5114-b6e2-4843-8ff5-ac67569c1dbd\") " pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.972765 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f6f5114-b6e2-4843-8ff5-ac67569c1dbd-catalog-content\") pod \"redhat-marketplace-dnq6k\" (UID: \"4f6f5114-b6e2-4843-8ff5-ac67569c1dbd\") " pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 
07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.972812 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f6f5114-b6e2-4843-8ff5-ac67569c1dbd-utilities\") pod \"redhat-marketplace-dnq6k\" (UID: \"4f6f5114-b6e2-4843-8ff5-ac67569c1dbd\") " pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.973225 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f6f5114-b6e2-4843-8ff5-ac67569c1dbd-catalog-content\") pod \"redhat-marketplace-dnq6k\" (UID: \"4f6f5114-b6e2-4843-8ff5-ac67569c1dbd\") " pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.973272 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f6f5114-b6e2-4843-8ff5-ac67569c1dbd-utilities\") pod \"redhat-marketplace-dnq6k\" (UID: \"4f6f5114-b6e2-4843-8ff5-ac67569c1dbd\") " pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 07:32:11 crc kubenswrapper[4730]: I0202 07:32:11.992934 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kphb9\" (UniqueName: \"kubernetes.io/projected/4f6f5114-b6e2-4843-8ff5-ac67569c1dbd-kube-api-access-kphb9\") pod \"redhat-marketplace-dnq6k\" (UID: \"4f6f5114-b6e2-4843-8ff5-ac67569c1dbd\") " pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.074253 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3084604-22f9-41f3-8f77-f2d0d0bee504-catalog-content\") pod \"community-operators-qkzng\" (UID: \"b3084604-22f9-41f3-8f77-f2d0d0bee504\") " pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 
07:32:12.074321 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3084604-22f9-41f3-8f77-f2d0d0bee504-utilities\") pod \"community-operators-qkzng\" (UID: \"b3084604-22f9-41f3-8f77-f2d0d0bee504\") " pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.074366 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5g5d\" (UniqueName: \"kubernetes.io/projected/b3084604-22f9-41f3-8f77-f2d0d0bee504-kube-api-access-c5g5d\") pod \"community-operators-qkzng\" (UID: \"b3084604-22f9-41f3-8f77-f2d0d0bee504\") " pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.084565 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.178876 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3084604-22f9-41f3-8f77-f2d0d0bee504-catalog-content\") pod \"community-operators-qkzng\" (UID: \"b3084604-22f9-41f3-8f77-f2d0d0bee504\") " pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.179153 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3084604-22f9-41f3-8f77-f2d0d0bee504-utilities\") pod \"community-operators-qkzng\" (UID: \"b3084604-22f9-41f3-8f77-f2d0d0bee504\") " pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.179214 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5g5d\" (UniqueName: 
\"kubernetes.io/projected/b3084604-22f9-41f3-8f77-f2d0d0bee504-kube-api-access-c5g5d\") pod \"community-operators-qkzng\" (UID: \"b3084604-22f9-41f3-8f77-f2d0d0bee504\") " pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.180331 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3084604-22f9-41f3-8f77-f2d0d0bee504-catalog-content\") pod \"community-operators-qkzng\" (UID: \"b3084604-22f9-41f3-8f77-f2d0d0bee504\") " pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.182144 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3084604-22f9-41f3-8f77-f2d0d0bee504-utilities\") pod \"community-operators-qkzng\" (UID: \"b3084604-22f9-41f3-8f77-f2d0d0bee504\") " pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.211390 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5g5d\" (UniqueName: \"kubernetes.io/projected/b3084604-22f9-41f3-8f77-f2d0d0bee504-kube-api-access-c5g5d\") pod \"community-operators-qkzng\" (UID: \"b3084604-22f9-41f3-8f77-f2d0d0bee504\") " pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.242434 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.483136 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnq6k"] Feb 02 07:32:12 crc kubenswrapper[4730]: W0202 07:32:12.489831 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f6f5114_b6e2_4843_8ff5_ac67569c1dbd.slice/crio-fc9b7740bc6748a82a03dfd3ca0f0ffc87deb57b232bdfc7f302b8e9d663dac1 WatchSource:0}: Error finding container fc9b7740bc6748a82a03dfd3ca0f0ffc87deb57b232bdfc7f302b8e9d663dac1: Status 404 returned error can't find the container with id fc9b7740bc6748a82a03dfd3ca0f0ffc87deb57b232bdfc7f302b8e9d663dac1 Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.646274 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qkzng"] Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.661779 4730 generic.go:334] "Generic (PLEG): container finished" podID="4f6f5114-b6e2-4843-8ff5-ac67569c1dbd" containerID="1aaa920a793b9f008e075f40aa90d4bdb0511a11cdae6593a3b460f8ef67860c" exitCode=0 Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.661897 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnq6k" event={"ID":"4f6f5114-b6e2-4843-8ff5-ac67569c1dbd","Type":"ContainerDied","Data":"1aaa920a793b9f008e075f40aa90d4bdb0511a11cdae6593a3b460f8ef67860c"} Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.662463 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnq6k" event={"ID":"4f6f5114-b6e2-4843-8ff5-ac67569c1dbd","Type":"ContainerStarted","Data":"fc9b7740bc6748a82a03dfd3ca0f0ffc87deb57b232bdfc7f302b8e9d663dac1"} Feb 02 07:32:12 crc kubenswrapper[4730]: I0202 07:32:12.665473 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-gsgln" Feb 02 07:32:12 crc kubenswrapper[4730]: W0202 07:32:12.692105 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3084604_22f9_41f3_8f77_f2d0d0bee504.slice/crio-28ae1f8c7e2a0b089baa546ededab1a6cb624646168fddbf6264b146c105d735 WatchSource:0}: Error finding container 28ae1f8c7e2a0b089baa546ededab1a6cb624646168fddbf6264b146c105d735: Status 404 returned error can't find the container with id 28ae1f8c7e2a0b089baa546ededab1a6cb624646168fddbf6264b146c105d735 Feb 02 07:32:13 crc kubenswrapper[4730]: I0202 07:32:13.669337 4730 generic.go:334] "Generic (PLEG): container finished" podID="4f6f5114-b6e2-4843-8ff5-ac67569c1dbd" containerID="95a360942e98232c5425f559eff6580111c6c1716069e97e61360f8e50bc533e" exitCode=0 Feb 02 07:32:13 crc kubenswrapper[4730]: I0202 07:32:13.669475 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnq6k" event={"ID":"4f6f5114-b6e2-4843-8ff5-ac67569c1dbd","Type":"ContainerDied","Data":"95a360942e98232c5425f559eff6580111c6c1716069e97e61360f8e50bc533e"} Feb 02 07:32:13 crc kubenswrapper[4730]: I0202 07:32:13.671025 4730 generic.go:334] "Generic (PLEG): container finished" podID="b3084604-22f9-41f3-8f77-f2d0d0bee504" containerID="0ef6fb981fe4b16d65543161de42637bfef35b1dd130bc01daf0697af783ce8e" exitCode=0 Feb 02 07:32:13 crc kubenswrapper[4730]: I0202 07:32:13.671058 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkzng" event={"ID":"b3084604-22f9-41f3-8f77-f2d0d0bee504","Type":"ContainerDied","Data":"0ef6fb981fe4b16d65543161de42637bfef35b1dd130bc01daf0697af783ce8e"} Feb 02 07:32:13 crc kubenswrapper[4730]: I0202 07:32:13.671083 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkzng" 
event={"ID":"b3084604-22f9-41f3-8f77-f2d0d0bee504","Type":"ContainerStarted","Data":"28ae1f8c7e2a0b089baa546ededab1a6cb624646168fddbf6264b146c105d735"} Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.332587 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mdtqb"] Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.333706 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.335326 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.344114 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mdtqb"] Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.406856 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gbj8\" (UniqueName: \"kubernetes.io/projected/4f71e670-eb96-4321-af75-8ef24727cb13-kube-api-access-8gbj8\") pod \"certified-operators-mdtqb\" (UID: \"4f71e670-eb96-4321-af75-8ef24727cb13\") " pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.406931 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f71e670-eb96-4321-af75-8ef24727cb13-catalog-content\") pod \"certified-operators-mdtqb\" (UID: \"4f71e670-eb96-4321-af75-8ef24727cb13\") " pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.406982 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f71e670-eb96-4321-af75-8ef24727cb13-utilities\") pod 
\"certified-operators-mdtqb\" (UID: \"4f71e670-eb96-4321-af75-8ef24727cb13\") " pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.508384 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f71e670-eb96-4321-af75-8ef24727cb13-catalog-content\") pod \"certified-operators-mdtqb\" (UID: \"4f71e670-eb96-4321-af75-8ef24727cb13\") " pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.508702 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f71e670-eb96-4321-af75-8ef24727cb13-utilities\") pod \"certified-operators-mdtqb\" (UID: \"4f71e670-eb96-4321-af75-8ef24727cb13\") " pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.508756 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gbj8\" (UniqueName: \"kubernetes.io/projected/4f71e670-eb96-4321-af75-8ef24727cb13-kube-api-access-8gbj8\") pod \"certified-operators-mdtqb\" (UID: \"4f71e670-eb96-4321-af75-8ef24727cb13\") " pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.509149 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f71e670-eb96-4321-af75-8ef24727cb13-catalog-content\") pod \"certified-operators-mdtqb\" (UID: \"4f71e670-eb96-4321-af75-8ef24727cb13\") " pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.509479 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f71e670-eb96-4321-af75-8ef24727cb13-utilities\") pod \"certified-operators-mdtqb\" (UID: 
\"4f71e670-eb96-4321-af75-8ef24727cb13\") " pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.548134 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kznfh"] Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.550002 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.552381 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.554828 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gbj8\" (UniqueName: \"kubernetes.io/projected/4f71e670-eb96-4321-af75-8ef24727cb13-kube-api-access-8gbj8\") pod \"certified-operators-mdtqb\" (UID: \"4f71e670-eb96-4321-af75-8ef24727cb13\") " pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.567333 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kznfh"] Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.612897 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wbwq\" (UniqueName: \"kubernetes.io/projected/ace3527b-5d93-430c-8ae2-89d447f31735-kube-api-access-7wbwq\") pod \"redhat-operators-kznfh\" (UID: \"ace3527b-5d93-430c-8ae2-89d447f31735\") " pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.612946 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace3527b-5d93-430c-8ae2-89d447f31735-utilities\") pod \"redhat-operators-kznfh\" (UID: \"ace3527b-5d93-430c-8ae2-89d447f31735\") " 
pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.612986 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace3527b-5d93-430c-8ae2-89d447f31735-catalog-content\") pod \"redhat-operators-kznfh\" (UID: \"ace3527b-5d93-430c-8ae2-89d447f31735\") " pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.678071 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnq6k" event={"ID":"4f6f5114-b6e2-4843-8ff5-ac67569c1dbd","Type":"ContainerStarted","Data":"86d695bfecc2f5026c7ff791044c0b9177ff79ffe65506e3225bca3a538ee44e"} Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.680030 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkzng" event={"ID":"b3084604-22f9-41f3-8f77-f2d0d0bee504","Type":"ContainerStarted","Data":"14c1409a1f8b8c37e19c8548981ed8eb03e757ab5d0babe9ac75b2a11851eb89"} Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.697172 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dnq6k" podStartSLOduration=2.30401977 podStartE2EDuration="3.697137032s" podCreationTimestamp="2026-02-02 07:32:11 +0000 UTC" firstStartedPulling="2026-02-02 07:32:12.663816917 +0000 UTC m=+306.085020265" lastFinishedPulling="2026-02-02 07:32:14.056934159 +0000 UTC m=+307.478137527" observedRunningTime="2026-02-02 07:32:14.691211883 +0000 UTC m=+308.112415231" watchObservedRunningTime="2026-02-02 07:32:14.697137032 +0000 UTC m=+308.118340380" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.697797 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.714089 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace3527b-5d93-430c-8ae2-89d447f31735-catalog-content\") pod \"redhat-operators-kznfh\" (UID: \"ace3527b-5d93-430c-8ae2-89d447f31735\") " pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.714285 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wbwq\" (UniqueName: \"kubernetes.io/projected/ace3527b-5d93-430c-8ae2-89d447f31735-kube-api-access-7wbwq\") pod \"redhat-operators-kznfh\" (UID: \"ace3527b-5d93-430c-8ae2-89d447f31735\") " pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.714333 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace3527b-5d93-430c-8ae2-89d447f31735-utilities\") pod \"redhat-operators-kznfh\" (UID: \"ace3527b-5d93-430c-8ae2-89d447f31735\") " pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.715082 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace3527b-5d93-430c-8ae2-89d447f31735-utilities\") pod \"redhat-operators-kznfh\" (UID: \"ace3527b-5d93-430c-8ae2-89d447f31735\") " pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.716414 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace3527b-5d93-430c-8ae2-89d447f31735-catalog-content\") pod \"redhat-operators-kznfh\" (UID: \"ace3527b-5d93-430c-8ae2-89d447f31735\") " 
pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.732891 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wbwq\" (UniqueName: \"kubernetes.io/projected/ace3527b-5d93-430c-8ae2-89d447f31735-kube-api-access-7wbwq\") pod \"redhat-operators-kznfh\" (UID: \"ace3527b-5d93-430c-8ae2-89d447f31735\") " pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.901585 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mdtqb"] Feb 02 07:32:14 crc kubenswrapper[4730]: I0202 07:32:14.912923 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:15 crc kubenswrapper[4730]: I0202 07:32:15.318773 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kznfh"] Feb 02 07:32:15 crc kubenswrapper[4730]: I0202 07:32:15.688291 4730 generic.go:334] "Generic (PLEG): container finished" podID="b3084604-22f9-41f3-8f77-f2d0d0bee504" containerID="14c1409a1f8b8c37e19c8548981ed8eb03e757ab5d0babe9ac75b2a11851eb89" exitCode=0 Feb 02 07:32:15 crc kubenswrapper[4730]: I0202 07:32:15.688366 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkzng" event={"ID":"b3084604-22f9-41f3-8f77-f2d0d0bee504","Type":"ContainerDied","Data":"14c1409a1f8b8c37e19c8548981ed8eb03e757ab5d0babe9ac75b2a11851eb89"} Feb 02 07:32:15 crc kubenswrapper[4730]: I0202 07:32:15.690659 4730 generic.go:334] "Generic (PLEG): container finished" podID="ace3527b-5d93-430c-8ae2-89d447f31735" containerID="b26d01208ad92f28d9934f5d1a8f34df8b6a4fab7c42408f33288f94b0b5f89a" exitCode=0 Feb 02 07:32:15 crc kubenswrapper[4730]: I0202 07:32:15.690796 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kznfh" 
event={"ID":"ace3527b-5d93-430c-8ae2-89d447f31735","Type":"ContainerDied","Data":"b26d01208ad92f28d9934f5d1a8f34df8b6a4fab7c42408f33288f94b0b5f89a"} Feb 02 07:32:15 crc kubenswrapper[4730]: I0202 07:32:15.690855 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kznfh" event={"ID":"ace3527b-5d93-430c-8ae2-89d447f31735","Type":"ContainerStarted","Data":"13c53b979be06862ec1bbfe4ce26277e6a12104e3cba9d9ac39e472bc1d2c64c"} Feb 02 07:32:15 crc kubenswrapper[4730]: I0202 07:32:15.693854 4730 generic.go:334] "Generic (PLEG): container finished" podID="4f71e670-eb96-4321-af75-8ef24727cb13" containerID="5b7d5860ec34f0b11a5dd01a28079715b8086e1bb7f7653abe0bfb70c4ed7496" exitCode=0 Feb 02 07:32:15 crc kubenswrapper[4730]: I0202 07:32:15.694192 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mdtqb" event={"ID":"4f71e670-eb96-4321-af75-8ef24727cb13","Type":"ContainerDied","Data":"5b7d5860ec34f0b11a5dd01a28079715b8086e1bb7f7653abe0bfb70c4ed7496"} Feb 02 07:32:15 crc kubenswrapper[4730]: I0202 07:32:15.694244 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mdtqb" event={"ID":"4f71e670-eb96-4321-af75-8ef24727cb13","Type":"ContainerStarted","Data":"2095e14570c704a22af1c32a9b711c7580dd116cde2fc797b2317183676a36b3"} Feb 02 07:32:16 crc kubenswrapper[4730]: I0202 07:32:16.702816 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkzng" event={"ID":"b3084604-22f9-41f3-8f77-f2d0d0bee504","Type":"ContainerStarted","Data":"35520fded1af31da0d5691101aa7b1db9817478a59eaf3b3511dc78ef3ee05a9"} Feb 02 07:32:16 crc kubenswrapper[4730]: I0202 07:32:16.706483 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kznfh" 
event={"ID":"ace3527b-5d93-430c-8ae2-89d447f31735","Type":"ContainerStarted","Data":"7a09766b7706c80a16ac9e86e1ac87ad38743d6b7e165a72dfd3a89cb06a332f"} Feb 02 07:32:16 crc kubenswrapper[4730]: I0202 07:32:16.709203 4730 generic.go:334] "Generic (PLEG): container finished" podID="4f71e670-eb96-4321-af75-8ef24727cb13" containerID="c32334867ffd8fce441e091cc52712c123899d159d6813a9854d74c1ebf274aa" exitCode=0 Feb 02 07:32:16 crc kubenswrapper[4730]: I0202 07:32:16.709242 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mdtqb" event={"ID":"4f71e670-eb96-4321-af75-8ef24727cb13","Type":"ContainerDied","Data":"c32334867ffd8fce441e091cc52712c123899d159d6813a9854d74c1ebf274aa"} Feb 02 07:32:16 crc kubenswrapper[4730]: I0202 07:32:16.725083 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qkzng" podStartSLOduration=3.33334145 podStartE2EDuration="5.725066102s" podCreationTimestamp="2026-02-02 07:32:11 +0000 UTC" firstStartedPulling="2026-02-02 07:32:13.672629929 +0000 UTC m=+307.093833277" lastFinishedPulling="2026-02-02 07:32:16.064354581 +0000 UTC m=+309.485557929" observedRunningTime="2026-02-02 07:32:16.721832746 +0000 UTC m=+310.143036114" watchObservedRunningTime="2026-02-02 07:32:16.725066102 +0000 UTC m=+310.146269450" Feb 02 07:32:17 crc kubenswrapper[4730]: I0202 07:32:17.717394 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 07:32:17 crc kubenswrapper[4730]: I0202 07:32:17.719434 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 07:32:17 crc kubenswrapper[4730]: I0202 07:32:17.719494 4730 generic.go:334] "Generic (PLEG): container finished" 
podID="f614b9022728cf315e60c057852e563e" containerID="1578f37584f251605889c87e5dee519a9c9dda0b274e4e5261a248e74b56dcd1" exitCode=137 Feb 02 07:32:17 crc kubenswrapper[4730]: I0202 07:32:17.719568 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1578f37584f251605889c87e5dee519a9c9dda0b274e4e5261a248e74b56dcd1"} Feb 02 07:32:17 crc kubenswrapper[4730]: I0202 07:32:17.719608 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f7b09f8edfaf686c8f182bb29862cd95482fc1860f26f00763050aa2006cd991"} Feb 02 07:32:17 crc kubenswrapper[4730]: I0202 07:32:17.719635 4730 scope.go:117] "RemoveContainer" containerID="8d9566ca36c314af505cbe23a33b0d8449dcd8ea07e3203b7b285dd0aa4a3f26" Feb 02 07:32:17 crc kubenswrapper[4730]: I0202 07:32:17.724186 4730 generic.go:334] "Generic (PLEG): container finished" podID="ace3527b-5d93-430c-8ae2-89d447f31735" containerID="7a09766b7706c80a16ac9e86e1ac87ad38743d6b7e165a72dfd3a89cb06a332f" exitCode=0 Feb 02 07:32:17 crc kubenswrapper[4730]: I0202 07:32:17.724281 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kznfh" event={"ID":"ace3527b-5d93-430c-8ae2-89d447f31735","Type":"ContainerDied","Data":"7a09766b7706c80a16ac9e86e1ac87ad38743d6b7e165a72dfd3a89cb06a332f"} Feb 02 07:32:17 crc kubenswrapper[4730]: I0202 07:32:17.728226 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mdtqb" event={"ID":"4f71e670-eb96-4321-af75-8ef24727cb13","Type":"ContainerStarted","Data":"9a575a1f467c09d94610a606fb8ef4456d6452919edfe1a4413f2822430f47c7"} Feb 02 07:32:17 crc kubenswrapper[4730]: I0202 07:32:17.770305 4730 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-mdtqb" podStartSLOduration=2.367033705 podStartE2EDuration="3.770289809s" podCreationTimestamp="2026-02-02 07:32:14 +0000 UTC" firstStartedPulling="2026-02-02 07:32:15.696062429 +0000 UTC m=+309.117265777" lastFinishedPulling="2026-02-02 07:32:17.099318513 +0000 UTC m=+310.520521881" observedRunningTime="2026-02-02 07:32:17.767376011 +0000 UTC m=+311.188579359" watchObservedRunningTime="2026-02-02 07:32:17.770289809 +0000 UTC m=+311.191493157" Feb 02 07:32:18 crc kubenswrapper[4730]: I0202 07:32:18.734843 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 07:32:18 crc kubenswrapper[4730]: I0202 07:32:18.738509 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kznfh" event={"ID":"ace3527b-5d93-430c-8ae2-89d447f31735","Type":"ContainerStarted","Data":"018a018a3b17d59fa6c68b6ec25914730369d3cb0ca839503ce83b893c8a7979"} Feb 02 07:32:18 crc kubenswrapper[4730]: I0202 07:32:18.757436 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kznfh" podStartSLOduration=2.303637999 podStartE2EDuration="4.757416451s" podCreationTimestamp="2026-02-02 07:32:14 +0000 UTC" firstStartedPulling="2026-02-02 07:32:15.692003521 +0000 UTC m=+309.113206869" lastFinishedPulling="2026-02-02 07:32:18.145781973 +0000 UTC m=+311.566985321" observedRunningTime="2026-02-02 07:32:18.756768074 +0000 UTC m=+312.177971462" watchObservedRunningTime="2026-02-02 07:32:18.757416451 +0000 UTC m=+312.178619799" Feb 02 07:32:22 crc kubenswrapper[4730]: I0202 07:32:22.085310 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 07:32:22 crc kubenswrapper[4730]: I0202 07:32:22.085592 4730 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 07:32:22 crc kubenswrapper[4730]: I0202 07:32:22.125038 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 07:32:22 crc kubenswrapper[4730]: I0202 07:32:22.243507 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:22 crc kubenswrapper[4730]: I0202 07:32:22.243565 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:22 crc kubenswrapper[4730]: I0202 07:32:22.283772 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:22 crc kubenswrapper[4730]: I0202 07:32:22.368456 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:32:22 crc kubenswrapper[4730]: I0202 07:32:22.797852 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dnq6k" Feb 02 07:32:22 crc kubenswrapper[4730]: I0202 07:32:22.808726 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qkzng" Feb 02 07:32:24 crc kubenswrapper[4730]: I0202 07:32:24.699151 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:24 crc kubenswrapper[4730]: I0202 07:32:24.699440 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:24 crc kubenswrapper[4730]: I0202 07:32:24.766234 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:24 crc kubenswrapper[4730]: I0202 07:32:24.814290 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mdtqb" Feb 02 07:32:24 crc kubenswrapper[4730]: I0202 07:32:24.913251 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:24 crc kubenswrapper[4730]: I0202 07:32:24.913331 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:24 crc kubenswrapper[4730]: I0202 07:32:24.958968 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:25 crc kubenswrapper[4730]: I0202 07:32:25.829064 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kznfh" Feb 02 07:32:26 crc kubenswrapper[4730]: I0202 07:32:26.776458 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:32:26 crc kubenswrapper[4730]: I0202 07:32:26.780742 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:32:32 crc kubenswrapper[4730]: I0202 07:32:32.373953 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.462432 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-575cc5b957-q2pwz"] Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.472380 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9f6dc"] Feb 02 07:32:34 crc 
kubenswrapper[4730]: I0202 07:32:34.473379 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.491218 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9f6dc"] Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.586303 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4aaf6090-ad7f-47b1-a763-6385007125ef-registry-certificates\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.586354 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4aaf6090-ad7f-47b1-a763-6385007125ef-registry-tls\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.586372 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4aaf6090-ad7f-47b1-a763-6385007125ef-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.586407 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4aaf6090-ad7f-47b1-a763-6385007125ef-ca-trust-extracted\") pod 
\"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.586606 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4aaf6090-ad7f-47b1-a763-6385007125ef-trusted-ca\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.586689 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.586735 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvbd7\" (UniqueName: \"kubernetes.io/projected/4aaf6090-ad7f-47b1-a763-6385007125ef-kube-api-access-dvbd7\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.586926 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4aaf6090-ad7f-47b1-a763-6385007125ef-bound-sa-token\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.654316 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.688518 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4aaf6090-ad7f-47b1-a763-6385007125ef-trusted-ca\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.688574 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvbd7\" (UniqueName: \"kubernetes.io/projected/4aaf6090-ad7f-47b1-a763-6385007125ef-kube-api-access-dvbd7\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.688624 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4aaf6090-ad7f-47b1-a763-6385007125ef-bound-sa-token\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.688668 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4aaf6090-ad7f-47b1-a763-6385007125ef-registry-certificates\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.688688 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4aaf6090-ad7f-47b1-a763-6385007125ef-registry-tls\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.688706 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4aaf6090-ad7f-47b1-a763-6385007125ef-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.688738 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4aaf6090-ad7f-47b1-a763-6385007125ef-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.689497 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4aaf6090-ad7f-47b1-a763-6385007125ef-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.690601 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4aaf6090-ad7f-47b1-a763-6385007125ef-trusted-ca\") pod 
\"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.691053 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4aaf6090-ad7f-47b1-a763-6385007125ef-registry-certificates\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.703808 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4aaf6090-ad7f-47b1-a763-6385007125ef-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.710185 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4aaf6090-ad7f-47b1-a763-6385007125ef-registry-tls\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.713539 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvbd7\" (UniqueName: \"kubernetes.io/projected/4aaf6090-ad7f-47b1-a763-6385007125ef-kube-api-access-dvbd7\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.721591 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/4aaf6090-ad7f-47b1-a763-6385007125ef-bound-sa-token\") pod \"image-registry-66df7c8f76-9f6dc\" (UID: \"4aaf6090-ad7f-47b1-a763-6385007125ef\") " pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:34 crc kubenswrapper[4730]: I0202 07:32:34.796787 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:35 crc kubenswrapper[4730]: I0202 07:32:35.261240 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9f6dc"] Feb 02 07:32:35 crc kubenswrapper[4730]: W0202 07:32:35.263656 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aaf6090_ad7f_47b1_a763_6385007125ef.slice/crio-c87ce5de898e013a2f6fb5a3282fe301849c1bfb96cb2a6439a76081cf3ff33c WatchSource:0}: Error finding container c87ce5de898e013a2f6fb5a3282fe301849c1bfb96cb2a6439a76081cf3ff33c: Status 404 returned error can't find the container with id c87ce5de898e013a2f6fb5a3282fe301849c1bfb96cb2a6439a76081cf3ff33c Feb 02 07:32:35 crc kubenswrapper[4730]: I0202 07:32:35.837811 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" event={"ID":"4aaf6090-ad7f-47b1-a763-6385007125ef","Type":"ContainerStarted","Data":"b867bcf8a76de6991fbeb85460554a65d1dfc1f86c9a8e41863b4531b6f5ceba"} Feb 02 07:32:35 crc kubenswrapper[4730]: I0202 07:32:35.839001 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:35 crc kubenswrapper[4730]: I0202 07:32:35.839091 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" 
event={"ID":"4aaf6090-ad7f-47b1-a763-6385007125ef","Type":"ContainerStarted","Data":"c87ce5de898e013a2f6fb5a3282fe301849c1bfb96cb2a6439a76081cf3ff33c"} Feb 02 07:32:35 crc kubenswrapper[4730]: I0202 07:32:35.860661 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" podStartSLOduration=1.86064428 podStartE2EDuration="1.86064428s" podCreationTimestamp="2026-02-02 07:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:32:35.858081431 +0000 UTC m=+329.279284789" watchObservedRunningTime="2026-02-02 07:32:35.86064428 +0000 UTC m=+329.281847628" Feb 02 07:32:54 crc kubenswrapper[4730]: I0202 07:32:54.808908 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9f6dc" Feb 02 07:32:54 crc kubenswrapper[4730]: I0202 07:32:54.880752 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xmtgb"] Feb 02 07:32:57 crc kubenswrapper[4730]: I0202 07:32:57.660939 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:32:57 crc kubenswrapper[4730]: I0202 07:32:57.661536 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.496731 4730 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" podUID="ba450d82-c017-40a1-a790-50bcc3a8ce20" containerName="oauth-openshift" containerID="cri-o://7df72949a48e5f838fa86144b0c876fe54b78d6f8598fdef828eb76438753d94" gracePeriod=15 Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.859928 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.900952 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-58777685cc-rzlrl"] Feb 02 07:32:59 crc kubenswrapper[4730]: E0202 07:32:59.901243 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba450d82-c017-40a1-a790-50bcc3a8ce20" containerName="oauth-openshift" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.901262 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba450d82-c017-40a1-a790-50bcc3a8ce20" containerName="oauth-openshift" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.901363 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba450d82-c017-40a1-a790-50bcc3a8ce20" containerName="oauth-openshift" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.901772 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.914512 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58777685cc-rzlrl"] Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.952897 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-ocp-branding-template\") pod \"ba450d82-c017-40a1-a790-50bcc3a8ce20\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.952955 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-audit-policies\") pod \"ba450d82-c017-40a1-a790-50bcc3a8ce20\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.952979 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-login\") pod \"ba450d82-c017-40a1-a790-50bcc3a8ce20\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.953009 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-trusted-ca-bundle\") pod \"ba450d82-c017-40a1-a790-50bcc3a8ce20\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.953036 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-router-certs\") pod \"ba450d82-c017-40a1-a790-50bcc3a8ce20\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.953053 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-error\") pod \"ba450d82-c017-40a1-a790-50bcc3a8ce20\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.953081 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-provider-selection\") pod \"ba450d82-c017-40a1-a790-50bcc3a8ce20\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.953102 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba450d82-c017-40a1-a790-50bcc3a8ce20-audit-dir\") pod \"ba450d82-c017-40a1-a790-50bcc3a8ce20\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.953170 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-idp-0-file-data\") pod \"ba450d82-c017-40a1-a790-50bcc3a8ce20\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.953187 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-serving-cert\") pod \"ba450d82-c017-40a1-a790-50bcc3a8ce20\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.953214 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-cliconfig\") pod \"ba450d82-c017-40a1-a790-50bcc3a8ce20\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.953230 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-service-ca\") pod \"ba450d82-c017-40a1-a790-50bcc3a8ce20\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.953250 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v4d2\" (UniqueName: \"kubernetes.io/projected/ba450d82-c017-40a1-a790-50bcc3a8ce20-kube-api-access-6v4d2\") pod \"ba450d82-c017-40a1-a790-50bcc3a8ce20\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.953270 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-session\") pod \"ba450d82-c017-40a1-a790-50bcc3a8ce20\" (UID: \"ba450d82-c017-40a1-a790-50bcc3a8ce20\") " Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.953523 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba450d82-c017-40a1-a790-50bcc3a8ce20-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ba450d82-c017-40a1-a790-50bcc3a8ce20" (UID: 
"ba450d82-c017-40a1-a790-50bcc3a8ce20"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.953886 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ba450d82-c017-40a1-a790-50bcc3a8ce20" (UID: "ba450d82-c017-40a1-a790-50bcc3a8ce20"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.954100 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ba450d82-c017-40a1-a790-50bcc3a8ce20" (UID: "ba450d82-c017-40a1-a790-50bcc3a8ce20"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.954485 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ba450d82-c017-40a1-a790-50bcc3a8ce20" (UID: "ba450d82-c017-40a1-a790-50bcc3a8ce20"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.954593 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ba450d82-c017-40a1-a790-50bcc3a8ce20" (UID: "ba450d82-c017-40a1-a790-50bcc3a8ce20"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.958501 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ba450d82-c017-40a1-a790-50bcc3a8ce20" (UID: "ba450d82-c017-40a1-a790-50bcc3a8ce20"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.958610 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ba450d82-c017-40a1-a790-50bcc3a8ce20" (UID: "ba450d82-c017-40a1-a790-50bcc3a8ce20"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.958653 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ba450d82-c017-40a1-a790-50bcc3a8ce20" (UID: "ba450d82-c017-40a1-a790-50bcc3a8ce20"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.958824 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ba450d82-c017-40a1-a790-50bcc3a8ce20" (UID: "ba450d82-c017-40a1-a790-50bcc3a8ce20"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.959968 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ba450d82-c017-40a1-a790-50bcc3a8ce20" (UID: "ba450d82-c017-40a1-a790-50bcc3a8ce20"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.960032 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ba450d82-c017-40a1-a790-50bcc3a8ce20" (UID: "ba450d82-c017-40a1-a790-50bcc3a8ce20"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.960552 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ba450d82-c017-40a1-a790-50bcc3a8ce20" (UID: "ba450d82-c017-40a1-a790-50bcc3a8ce20"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.965246 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ba450d82-c017-40a1-a790-50bcc3a8ce20" (UID: "ba450d82-c017-40a1-a790-50bcc3a8ce20"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.965591 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba450d82-c017-40a1-a790-50bcc3a8ce20-kube-api-access-6v4d2" (OuterVolumeSpecName: "kube-api-access-6v4d2") pod "ba450d82-c017-40a1-a790-50bcc3a8ce20" (UID: "ba450d82-c017-40a1-a790-50bcc3a8ce20"). InnerVolumeSpecName "kube-api-access-6v4d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.978564 4730 generic.go:334] "Generic (PLEG): container finished" podID="ba450d82-c017-40a1-a790-50bcc3a8ce20" containerID="7df72949a48e5f838fa86144b0c876fe54b78d6f8598fdef828eb76438753d94" exitCode=0 Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.978616 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" event={"ID":"ba450d82-c017-40a1-a790-50bcc3a8ce20","Type":"ContainerDied","Data":"7df72949a48e5f838fa86144b0c876fe54b78d6f8598fdef828eb76438753d94"} Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.978659 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" event={"ID":"ba450d82-c017-40a1-a790-50bcc3a8ce20","Type":"ContainerDied","Data":"864c5f76351878dfe6461a7a21e34db033dccd28229ef546dc60451c1ac1ae93"} Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.978684 4730 scope.go:117] "RemoveContainer" containerID="7df72949a48e5f838fa86144b0c876fe54b78d6f8598fdef828eb76438753d94" Feb 02 07:32:59 crc kubenswrapper[4730]: I0202 07:32:59.978827 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-575cc5b957-q2pwz" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.006417 4730 scope.go:117] "RemoveContainer" containerID="7df72949a48e5f838fa86144b0c876fe54b78d6f8598fdef828eb76438753d94" Feb 02 07:33:00 crc kubenswrapper[4730]: E0202 07:33:00.006811 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7df72949a48e5f838fa86144b0c876fe54b78d6f8598fdef828eb76438753d94\": container with ID starting with 7df72949a48e5f838fa86144b0c876fe54b78d6f8598fdef828eb76438753d94 not found: ID does not exist" containerID="7df72949a48e5f838fa86144b0c876fe54b78d6f8598fdef828eb76438753d94" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.006857 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7df72949a48e5f838fa86144b0c876fe54b78d6f8598fdef828eb76438753d94"} err="failed to get container status \"7df72949a48e5f838fa86144b0c876fe54b78d6f8598fdef828eb76438753d94\": rpc error: code = NotFound desc = could not find container \"7df72949a48e5f838fa86144b0c876fe54b78d6f8598fdef828eb76438753d94\": container with ID starting with 7df72949a48e5f838fa86144b0c876fe54b78d6f8598fdef828eb76438753d94 not found: ID does not exist" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.016221 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-575cc5b957-q2pwz"] Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.022145 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-575cc5b957-q2pwz"] Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.054237 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.054279 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.054300 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d000749d-ce49-4267-a7ba-d2f605adf0b4-audit-dir\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.054320 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-router-certs\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.054341 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d000749d-ce49-4267-a7ba-d2f605adf0b4-audit-policies\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.054462 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.054487 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.054505 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.054530 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.054548 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-user-template-login\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.054572 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-service-ca\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.054676 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59t5\" (UniqueName: \"kubernetes.io/projected/d000749d-ce49-4267-a7ba-d2f605adf0b4-kube-api-access-k59t5\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.054751 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-session\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.054925 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-user-template-error\") pod 
\"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.055100 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.055131 4730 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.055155 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.055201 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.055220 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.055238 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 
07:33:00.055257 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.055279 4730 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba450d82-c017-40a1-a790-50bcc3a8ce20-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.055300 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.055318 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.055339 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.055357 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.055374 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v4d2\" (UniqueName: \"kubernetes.io/projected/ba450d82-c017-40a1-a790-50bcc3a8ce20-kube-api-access-6v4d2\") on node \"crc\" 
DevicePath \"\"" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.055395 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ba450d82-c017-40a1-a790-50bcc3a8ce20-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.156622 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-router-certs\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.156713 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d000749d-ce49-4267-a7ba-d2f605adf0b4-audit-policies\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.156882 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.157777 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d000749d-ce49-4267-a7ba-d2f605adf0b4-audit-policies\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " 
pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.156912 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.157920 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.157945 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.158056 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.158093 4730 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-user-template-login\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.158285 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-service-ca\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.158354 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k59t5\" (UniqueName: \"kubernetes.io/projected/d000749d-ce49-4267-a7ba-d2f605adf0b4-kube-api-access-k59t5\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.158840 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-session\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.158911 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-user-template-error\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " 
pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.159021 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.159063 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.159096 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d000749d-ce49-4267-a7ba-d2f605adf0b4-audit-dir\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.159208 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d000749d-ce49-4267-a7ba-d2f605adf0b4-audit-dir\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.160266 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.158913 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-service-ca\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.160798 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.160861 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-router-certs\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.161609 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " 
pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.161944 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-user-template-login\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.162033 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-session\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.162461 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-user-template-error\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.163300 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.179070 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/d000749d-ce49-4267-a7ba-d2f605adf0b4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.186077 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k59t5\" (UniqueName: \"kubernetes.io/projected/d000749d-ce49-4267-a7ba-d2f605adf0b4-kube-api-access-k59t5\") pod \"oauth-openshift-58777685cc-rzlrl\" (UID: \"d000749d-ce49-4267-a7ba-d2f605adf0b4\") " pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.216235 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.491891 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58777685cc-rzlrl"] Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.989246 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" event={"ID":"d000749d-ce49-4267-a7ba-d2f605adf0b4","Type":"ContainerStarted","Data":"3ee47af3bb252ea3e8b0bdd602eb530b1ebf584c390cdcafa681793874fc835d"} Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.989330 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" event={"ID":"d000749d-ce49-4267-a7ba-d2f605adf0b4","Type":"ContainerStarted","Data":"52cac9f133fca6185af1818985fa64f568ad3969807846cd3b4fb83be9225aa3"} Feb 02 07:33:00 crc kubenswrapper[4730]: I0202 07:33:00.991215 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:01 crc kubenswrapper[4730]: I0202 07:33:01.019630 4730 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" podStartSLOduration=27.01961301 podStartE2EDuration="27.01961301s" podCreationTimestamp="2026-02-02 07:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:33:01.01960289 +0000 UTC m=+354.440806268" watchObservedRunningTime="2026-02-02 07:33:01.01961301 +0000 UTC m=+354.440816358" Feb 02 07:33:01 crc kubenswrapper[4730]: I0202 07:33:01.168918 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58777685cc-rzlrl" Feb 02 07:33:01 crc kubenswrapper[4730]: I0202 07:33:01.259717 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba450d82-c017-40a1-a790-50bcc3a8ce20" path="/var/lib/kubelet/pods/ba450d82-c017-40a1-a790-50bcc3a8ce20/volumes" Feb 02 07:33:19 crc kubenswrapper[4730]: I0202 07:33:19.933673 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" podUID="55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb" containerName="registry" containerID="cri-o://acddef8fe01493bff8843f00235f021ba11b0b94ada3d47b6a0878ff2817b501" gracePeriod=30 Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.110263 4730 generic.go:334] "Generic (PLEG): container finished" podID="55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb" containerID="acddef8fe01493bff8843f00235f021ba11b0b94ada3d47b6a0878ff2817b501" exitCode=0 Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.110327 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" event={"ID":"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb","Type":"ContainerDied","Data":"acddef8fe01493bff8843f00235f021ba11b0b94ada3d47b6a0878ff2817b501"} Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.367602 4730 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.552896 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.552960 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-installation-pull-secrets\") pod \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.553024 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-bound-sa-token\") pod \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.553089 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-registry-tls\") pod \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.553226 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-trusted-ca\") pod \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.553265 
4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-registry-certificates\") pod \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.553328 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk7xn\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-kube-api-access-sk7xn\") pod \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.553402 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-ca-trust-extracted\") pod \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\" (UID: \"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb\") " Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.554805 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.554818 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.560526 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.561052 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-kube-api-access-sk7xn" (OuterVolumeSpecName: "kube-api-access-sk7xn") pod "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb"). InnerVolumeSpecName "kube-api-access-sk7xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.562370 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.563534 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.566515 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.583594 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb" (UID: "55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.654674 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.654729 4730 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.654744 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk7xn\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-kube-api-access-sk7xn\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.654758 4730 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.654771 4730 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.654781 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:20 crc kubenswrapper[4730]: I0202 07:33:20.654792 4730 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:21 crc kubenswrapper[4730]: I0202 07:33:21.119414 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" event={"ID":"55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb","Type":"ContainerDied","Data":"3a7399d1f5a19a35fe75d97efe9a9b4b94d71ce099bffa61c589c7902ec40007"} Feb 02 07:33:21 crc kubenswrapper[4730]: I0202 07:33:21.119470 4730 scope.go:117] "RemoveContainer" containerID="acddef8fe01493bff8843f00235f021ba11b0b94ada3d47b6a0878ff2817b501" Feb 02 07:33:21 crc kubenswrapper[4730]: I0202 07:33:21.119487 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xmtgb" Feb 02 07:33:21 crc kubenswrapper[4730]: I0202 07:33:21.169437 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xmtgb"] Feb 02 07:33:21 crc kubenswrapper[4730]: I0202 07:33:21.178510 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xmtgb"] Feb 02 07:33:21 crc kubenswrapper[4730]: I0202 07:33:21.259752 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb" path="/var/lib/kubelet/pods/55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb/volumes" Feb 02 07:33:27 crc kubenswrapper[4730]: I0202 07:33:27.660534 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:33:27 crc kubenswrapper[4730]: I0202 07:33:27.661154 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:33:57 crc kubenswrapper[4730]: I0202 07:33:57.660667 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:33:57 crc kubenswrapper[4730]: I0202 07:33:57.661294 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" 
podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:33:57 crc kubenswrapper[4730]: I0202 07:33:57.661360 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:33:57 crc kubenswrapper[4730]: I0202 07:33:57.662111 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8f3d89438b2c90a3df4d2c24ead952c1532c846097a23f6bde4650baadb23c4"} pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:33:57 crc kubenswrapper[4730]: I0202 07:33:57.662263 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" containerID="cri-o://c8f3d89438b2c90a3df4d2c24ead952c1532c846097a23f6bde4650baadb23c4" gracePeriod=600 Feb 02 07:33:58 crc kubenswrapper[4730]: I0202 07:33:58.366713 4730 generic.go:334] "Generic (PLEG): container finished" podID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerID="c8f3d89438b2c90a3df4d2c24ead952c1532c846097a23f6bde4650baadb23c4" exitCode=0 Feb 02 07:33:58 crc kubenswrapper[4730]: I0202 07:33:58.366798 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerDied","Data":"c8f3d89438b2c90a3df4d2c24ead952c1532c846097a23f6bde4650baadb23c4"} Feb 02 07:33:58 crc kubenswrapper[4730]: I0202 07:33:58.367346 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerStarted","Data":"8849749fe3e9f64250963ade15077dd456c8db563b57325c073662609fdb45bd"} Feb 02 07:33:58 crc kubenswrapper[4730]: I0202 07:33:58.367383 4730 scope.go:117] "RemoveContainer" containerID="0400f1c6c39544e46478e788fa03ee1707cbc15dd56f10ce7aeb6fd7f2436df1" Feb 02 07:35:57 crc kubenswrapper[4730]: I0202 07:35:57.660320 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:35:57 crc kubenswrapper[4730]: I0202 07:35:57.661117 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:36:07 crc kubenswrapper[4730]: I0202 07:36:07.465795 4730 scope.go:117] "RemoveContainer" containerID="15ee120a70ddfa2e03c90f7e104a948a13e4338fea2d8b1f05bc2aaad1a66305" Feb 02 07:36:07 crc kubenswrapper[4730]: I0202 07:36:07.496009 4730 scope.go:117] "RemoveContainer" containerID="f7d2d0d90705c95146d9fc4fb9cc0d4e79bbfa38497ca31dad23d1b116cbe601" Feb 02 07:36:07 crc kubenswrapper[4730]: I0202 07:36:07.517412 4730 scope.go:117] "RemoveContainer" containerID="d3262ce64287acf1d323030d25c834e80386a99a74488ea47de8ee2b5829a942" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.506275 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vmn4d"] Feb 02 07:36:23 crc kubenswrapper[4730]: E0202 07:36:23.507099 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb" containerName="registry" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.507114 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb" containerName="registry" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.507245 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="55eaa89c-c6a4-4c2c-8218-f4235cdcc6fb" containerName="registry" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.507709 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vmn4d" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.516030 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-qsm4t"] Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.516900 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qsm4t" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.525219 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.525615 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.525926 4730 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zfnnz" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.527041 4730 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-g2nm2" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.528711 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vmn4d"] Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.543072 4730 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["cert-manager/cert-manager-webhook-687f57d79b-6k6fw"] Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.544180 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-6k6fw" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.546021 4730 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jhxmf" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.547797 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snctq\" (UniqueName: \"kubernetes.io/projected/b35b55c2-6ef3-42f1-8d0f-5a878b8edfe9-kube-api-access-snctq\") pod \"cert-manager-cainjector-cf98fcc89-vmn4d\" (UID: \"b35b55c2-6ef3-42f1-8d0f-5a878b8edfe9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vmn4d" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.547830 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2lch\" (UniqueName: \"kubernetes.io/projected/224bf167-43f9-4c9c-8b93-f607669dd5a5-kube-api-access-z2lch\") pod \"cert-manager-858654f9db-qsm4t\" (UID: \"224bf167-43f9-4c9c-8b93-f607669dd5a5\") " pod="cert-manager/cert-manager-858654f9db-qsm4t" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.552843 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qsm4t"] Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.568397 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-6k6fw"] Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.648674 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2lch\" (UniqueName: \"kubernetes.io/projected/224bf167-43f9-4c9c-8b93-f607669dd5a5-kube-api-access-z2lch\") pod \"cert-manager-858654f9db-qsm4t\" (UID: 
\"224bf167-43f9-4c9c-8b93-f607669dd5a5\") " pod="cert-manager/cert-manager-858654f9db-qsm4t" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.648822 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlff2\" (UniqueName: \"kubernetes.io/projected/ab38a1d0-fdbd-4dcb-b02c-a56b0c851b78-kube-api-access-tlff2\") pod \"cert-manager-webhook-687f57d79b-6k6fw\" (UID: \"ab38a1d0-fdbd-4dcb-b02c-a56b0c851b78\") " pod="cert-manager/cert-manager-webhook-687f57d79b-6k6fw" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.648858 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snctq\" (UniqueName: \"kubernetes.io/projected/b35b55c2-6ef3-42f1-8d0f-5a878b8edfe9-kube-api-access-snctq\") pod \"cert-manager-cainjector-cf98fcc89-vmn4d\" (UID: \"b35b55c2-6ef3-42f1-8d0f-5a878b8edfe9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vmn4d" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.670037 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2lch\" (UniqueName: \"kubernetes.io/projected/224bf167-43f9-4c9c-8b93-f607669dd5a5-kube-api-access-z2lch\") pod \"cert-manager-858654f9db-qsm4t\" (UID: \"224bf167-43f9-4c9c-8b93-f607669dd5a5\") " pod="cert-manager/cert-manager-858654f9db-qsm4t" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.672333 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snctq\" (UniqueName: \"kubernetes.io/projected/b35b55c2-6ef3-42f1-8d0f-5a878b8edfe9-kube-api-access-snctq\") pod \"cert-manager-cainjector-cf98fcc89-vmn4d\" (UID: \"b35b55c2-6ef3-42f1-8d0f-5a878b8edfe9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vmn4d" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.750058 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlff2\" (UniqueName: 
\"kubernetes.io/projected/ab38a1d0-fdbd-4dcb-b02c-a56b0c851b78-kube-api-access-tlff2\") pod \"cert-manager-webhook-687f57d79b-6k6fw\" (UID: \"ab38a1d0-fdbd-4dcb-b02c-a56b0c851b78\") " pod="cert-manager/cert-manager-webhook-687f57d79b-6k6fw" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.775954 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlff2\" (UniqueName: \"kubernetes.io/projected/ab38a1d0-fdbd-4dcb-b02c-a56b0c851b78-kube-api-access-tlff2\") pod \"cert-manager-webhook-687f57d79b-6k6fw\" (UID: \"ab38a1d0-fdbd-4dcb-b02c-a56b0c851b78\") " pod="cert-manager/cert-manager-webhook-687f57d79b-6k6fw" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.838214 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vmn4d" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.861192 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qsm4t" Feb 02 07:36:23 crc kubenswrapper[4730]: I0202 07:36:23.872014 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-6k6fw" Feb 02 07:36:24 crc kubenswrapper[4730]: I0202 07:36:24.034653 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vmn4d"] Feb 02 07:36:24 crc kubenswrapper[4730]: I0202 07:36:24.044306 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 07:36:24 crc kubenswrapper[4730]: I0202 07:36:24.110918 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-6k6fw"] Feb 02 07:36:24 crc kubenswrapper[4730]: I0202 07:36:24.145340 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qsm4t"] Feb 02 07:36:24 crc kubenswrapper[4730]: W0202 07:36:24.149447 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod224bf167_43f9_4c9c_8b93_f607669dd5a5.slice/crio-cea64b220d044b471cac1ce98be6aace41209701f8b42d33947450e26b8a408f WatchSource:0}: Error finding container cea64b220d044b471cac1ce98be6aace41209701f8b42d33947450e26b8a408f: Status 404 returned error can't find the container with id cea64b220d044b471cac1ce98be6aace41209701f8b42d33947450e26b8a408f Feb 02 07:36:24 crc kubenswrapper[4730]: I0202 07:36:24.322204 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qsm4t" event={"ID":"224bf167-43f9-4c9c-8b93-f607669dd5a5","Type":"ContainerStarted","Data":"cea64b220d044b471cac1ce98be6aace41209701f8b42d33947450e26b8a408f"} Feb 02 07:36:24 crc kubenswrapper[4730]: I0202 07:36:24.323648 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vmn4d" event={"ID":"b35b55c2-6ef3-42f1-8d0f-5a878b8edfe9","Type":"ContainerStarted","Data":"c94517700b069d49678fcb36c46998349d0e6be3436ff00f04f3ed2f6a3ac8e8"} Feb 02 07:36:24 crc kubenswrapper[4730]: 
I0202 07:36:24.324872 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-6k6fw" event={"ID":"ab38a1d0-fdbd-4dcb-b02c-a56b0c851b78","Type":"ContainerStarted","Data":"83ac775d7d86bd02ed9d29271685590cb0b6e955227585395d5d5666a894261c"} Feb 02 07:36:27 crc kubenswrapper[4730]: I0202 07:36:27.660011 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:36:27 crc kubenswrapper[4730]: I0202 07:36:27.660404 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:36:28 crc kubenswrapper[4730]: I0202 07:36:28.353264 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qsm4t" event={"ID":"224bf167-43f9-4c9c-8b93-f607669dd5a5","Type":"ContainerStarted","Data":"887f846dc5383b1f0e550578419d1b515789ffc9f2dddd31309a805b86b88ad5"} Feb 02 07:36:28 crc kubenswrapper[4730]: I0202 07:36:28.356420 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vmn4d" event={"ID":"b35b55c2-6ef3-42f1-8d0f-5a878b8edfe9","Type":"ContainerStarted","Data":"36b8ab2d793ffeb51f8422c156867f2d481f0d987b7f512f8f4f95f11833519e"} Feb 02 07:36:28 crc kubenswrapper[4730]: I0202 07:36:28.359223 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-6k6fw" 
event={"ID":"ab38a1d0-fdbd-4dcb-b02c-a56b0c851b78","Type":"ContainerStarted","Data":"2282954d82baa95f41c7d7ef1309a1433e821fa75abe09b5fe17fe0e029a4b35"} Feb 02 07:36:28 crc kubenswrapper[4730]: I0202 07:36:28.359650 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-6k6fw" Feb 02 07:36:28 crc kubenswrapper[4730]: I0202 07:36:28.382438 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-qsm4t" podStartSLOduration=1.968254381 podStartE2EDuration="5.382411164s" podCreationTimestamp="2026-02-02 07:36:23 +0000 UTC" firstStartedPulling="2026-02-02 07:36:24.151274621 +0000 UTC m=+557.572477969" lastFinishedPulling="2026-02-02 07:36:27.565431394 +0000 UTC m=+560.986634752" observedRunningTime="2026-02-02 07:36:28.375320956 +0000 UTC m=+561.796524334" watchObservedRunningTime="2026-02-02 07:36:28.382411164 +0000 UTC m=+561.803614542" Feb 02 07:36:28 crc kubenswrapper[4730]: I0202 07:36:28.421003 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-6k6fw" podStartSLOduration=2.044068489 podStartE2EDuration="5.420981716s" podCreationTimestamp="2026-02-02 07:36:23 +0000 UTC" firstStartedPulling="2026-02-02 07:36:24.115835312 +0000 UTC m=+557.537038660" lastFinishedPulling="2026-02-02 07:36:27.492748539 +0000 UTC m=+560.913951887" observedRunningTime="2026-02-02 07:36:28.400821192 +0000 UTC m=+561.822024620" watchObservedRunningTime="2026-02-02 07:36:28.420981716 +0000 UTC m=+561.842185074" Feb 02 07:36:33 crc kubenswrapper[4730]: I0202 07:36:33.789357 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vmn4d" podStartSLOduration=7.341054475 podStartE2EDuration="10.789328612s" podCreationTimestamp="2026-02-02 07:36:23 +0000 UTC" firstStartedPulling="2026-02-02 07:36:24.044058541 +0000 UTC m=+557.465261889" 
lastFinishedPulling="2026-02-02 07:36:27.492332658 +0000 UTC m=+560.913536026" observedRunningTime="2026-02-02 07:36:28.431316319 +0000 UTC m=+561.852519757" watchObservedRunningTime="2026-02-02 07:36:33.789328612 +0000 UTC m=+567.210532000" Feb 02 07:36:33 crc kubenswrapper[4730]: I0202 07:36:33.792558 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-54z89"] Feb 02 07:36:33 crc kubenswrapper[4730]: I0202 07:36:33.793482 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d" gracePeriod=30 Feb 02 07:36:33 crc kubenswrapper[4730]: I0202 07:36:33.793569 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="northd" containerID="cri-o://f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527" gracePeriod=30 Feb 02 07:36:33 crc kubenswrapper[4730]: I0202 07:36:33.793571 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="nbdb" containerID="cri-o://146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1" gracePeriod=30 Feb 02 07:36:33 crc kubenswrapper[4730]: I0202 07:36:33.793624 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovn-controller" containerID="cri-o://b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85" gracePeriod=30 Feb 02 07:36:33 crc kubenswrapper[4730]: I0202 07:36:33.793468 4730 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="kube-rbac-proxy-node" containerID="cri-o://a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f" gracePeriod=30 Feb 02 07:36:33 crc kubenswrapper[4730]: I0202 07:36:33.793733 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovn-acl-logging" containerID="cri-o://13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7" gracePeriod=30 Feb 02 07:36:33 crc kubenswrapper[4730]: I0202 07:36:33.793842 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="sbdb" containerID="cri-o://1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee" gracePeriod=30 Feb 02 07:36:33 crc kubenswrapper[4730]: I0202 07:36:33.848644 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" containerID="cri-o://e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67" gracePeriod=30 Feb 02 07:36:33 crc kubenswrapper[4730]: I0202 07:36:33.876137 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-6k6fw" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.129578 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba7d1b84_4596_463a_bc77_c365c3c969b0.slice/crio-conmon-146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1.scope\": RecentStats: unable to find data in memory cache]" Feb 02 07:36:34 crc kubenswrapper[4730]: 
I0202 07:36:34.159980 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/3.log" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.162196 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovn-acl-logging/0.log" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.162680 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovn-controller/0.log" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.163295 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.209993 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-cni-bin\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210029 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-systemd-units\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210077 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-ovn\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210077 4730 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210126 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-var-lib-openvswitch\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210134 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210175 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210190 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-run-ovn-kubernetes\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210196 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210210 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-node-log\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210217 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210230 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-cni-netd\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210239 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-node-log" (OuterVolumeSpecName: "node-log") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210250 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-openvswitch\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210260 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210283 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-systemd\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210306 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-env-overrides\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210322 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210342 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovnkube-config\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210371 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovn-node-metrics-cert\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210389 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-kubelet\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210411 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-run-netns\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210432 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-etc-openvswitch\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210452 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-slash\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210474 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovnkube-script-lib\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210496 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dbxs\" (UniqueName: \"kubernetes.io/projected/ba7d1b84-4596-463a-bc77-c365c3c969b0-kube-api-access-5dbxs\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: 
\"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210511 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-log-socket\") pod \"ba7d1b84-4596-463a-bc77-c365c3c969b0\" (UID: \"ba7d1b84-4596-463a-bc77-c365c3c969b0\") " Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210674 4730 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210684 4730 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210717 4730 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210726 4730 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210735 4730 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210744 4730 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-node-log\") on node \"crc\" DevicePath \"\"" Feb 
02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210751 4730 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210848 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-log-socket" (OuterVolumeSpecName: "log-socket") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.210944 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.211000 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.211394 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.211443 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.211465 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-slash" (OuterVolumeSpecName: "host-slash") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.211482 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.211890 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.213665 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.214243 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.216959 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.218859 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba7d1b84-4596-463a-bc77-c365c3c969b0-kube-api-access-5dbxs" (OuterVolumeSpecName: "kube-api-access-5dbxs") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "kube-api-access-5dbxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.219394 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6kkbj"] Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.219721 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.219801 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.219857 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.219921 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.219969 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="nbdb" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.220015 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="nbdb" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.220081 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.220128 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.220247 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" 
containerName="kube-rbac-proxy-node" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.220316 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="kube-rbac-proxy-node" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.220367 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovn-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.220414 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovn-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.220466 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="northd" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.220529 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="northd" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.220589 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovn-acl-logging" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.220634 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovn-acl-logging" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.220701 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="sbdb" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.220747 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="sbdb" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.220790 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="kubecfg-setup" Feb 02 07:36:34 
crc kubenswrapper[4730]: I0202 07:36:34.220836 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="kubecfg-setup" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.220889 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.220938 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.221064 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.221115 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="nbdb" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.221176 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="kube-rbac-proxy-node" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.221234 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.221282 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.221331 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.221375 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" 
containerName="ovn-acl-logging" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.221445 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="northd" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.221490 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="sbdb" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.221538 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovn-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.221588 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.221730 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.221783 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.221830 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.221875 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.222008 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerName="ovnkube-controller" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.223486 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.234938 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ba7d1b84-4596-463a-bc77-c365c3c969b0" (UID: "ba7d1b84-4596-463a-bc77-c365c3c969b0"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.311797 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-log-socket\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.311872 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-cni-netd\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.311899 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-node-log\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.311923 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-slash\") pod \"ovnkube-node-6kkbj\" (UID: 
\"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.311950 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-run-systemd\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.311978 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-run-netns\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312187 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-run-openvswitch\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312297 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20832e84-d371-4d98-a0bc-828a69d2adaa-ovnkube-config\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312388 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-systemd-units\") pod 
\"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312456 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-kubelet\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312520 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-etc-openvswitch\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312634 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20832e84-d371-4d98-a0bc-828a69d2adaa-env-overrides\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312692 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-var-lib-openvswitch\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312714 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/20832e84-d371-4d98-a0bc-828a69d2adaa-ovnkube-script-lib\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312742 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-run-ovn\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312776 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-run-ovn-kubernetes\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312802 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20832e84-d371-4d98-a0bc-828a69d2adaa-ovn-node-metrics-cert\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312856 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8khqm\" (UniqueName: \"kubernetes.io/projected/20832e84-d371-4d98-a0bc-828a69d2adaa-kube-api-access-8khqm\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312888 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-cni-bin\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312911 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312978 4730 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.312996 4730 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.313009 4730 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.313022 4730 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.313033 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.313046 4730 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.313057 4730 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.313067 4730 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.313076 4730 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.313148 4730 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-host-slash\") on node \"crc\" DevicePath \"\""
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.313193 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ba7d1b84-4596-463a-bc77-c365c3c969b0-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.313208 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dbxs\" (UniqueName: \"kubernetes.io/projected/ba7d1b84-4596-463a-bc77-c365c3c969b0-kube-api-access-5dbxs\") on node \"crc\" DevicePath \"\""
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.313223 4730 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ba7d1b84-4596-463a-bc77-c365c3c969b0-log-socket\") on node \"crc\" DevicePath \"\""
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.398396 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovnkube-controller/3.log"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.402477 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovn-acl-logging/0.log"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.403104 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54z89_ba7d1b84-4596-463a-bc77-c365c3c969b0/ovn-controller/0.log"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.403783 4730 generic.go:334] "Generic (PLEG): container finished" podID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerID="e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67" exitCode=0
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.403819 4730 generic.go:334] "Generic (PLEG): container finished" podID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerID="1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee" exitCode=0
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.403839 4730 generic.go:334] "Generic (PLEG): container finished" podID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerID="146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1" exitCode=0
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.403854 4730 generic.go:334] "Generic (PLEG): container finished" podID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerID="f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527" exitCode=0
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.403871 4730 generic.go:334] "Generic (PLEG): container finished" podID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerID="192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d" exitCode=0
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.403879 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-54z89"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.403881 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerDied","Data":"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.403938 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerDied","Data":"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.403957 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerDied","Data":"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.403883 4730 generic.go:334] "Generic (PLEG): container finished" podID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerID="a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f" exitCode=0
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.403972 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerDied","Data":"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404077 4730 generic.go:334] "Generic (PLEG): container finished" podID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerID="13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7" exitCode=143
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404147 4730 generic.go:334] "Generic (PLEG): container finished" podID="ba7d1b84-4596-463a-bc77-c365c3c969b0" containerID="b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85" exitCode=143
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.403986 4730 scope.go:117] "RemoveContainer" containerID="e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404103 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerDied","Data":"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404504 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerDied","Data":"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404538 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404717 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404745 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404756 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404769 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404779 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404790 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404801 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404811 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404828 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerDied","Data":"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404850 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404863 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404875 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404886 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404897 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404969 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404984 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.404994 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405009 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405021 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405038 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerDied","Data":"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405057 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405070 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405081 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405093 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405103 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405116 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405126 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405136 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405147 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405157 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405205 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54z89" event={"ID":"ba7d1b84-4596-463a-bc77-c365c3c969b0","Type":"ContainerDied","Data":"fcb185b3ef8f66d93abcb2be561742c7daf41a118df92c2def7f3d16e62b87a9"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405221 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405233 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405246 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405257 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405267 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405279 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405290 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405300 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405311 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.405321 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.410467 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zp8tp_00b75ed7-302d-4f21-9c20-6ecab241b7b4/kube-multus/2.log"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.411458 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zp8tp_00b75ed7-302d-4f21-9c20-6ecab241b7b4/kube-multus/1.log"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.411527 4730 generic.go:334] "Generic (PLEG): container finished" podID="00b75ed7-302d-4f21-9c20-6ecab241b7b4" containerID="2a57047b25d7c894ae9847587e840769d7dfb9315cfd38751fa9926475985a74" exitCode=2
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.411582 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zp8tp" event={"ID":"00b75ed7-302d-4f21-9c20-6ecab241b7b4","Type":"ContainerDied","Data":"2a57047b25d7c894ae9847587e840769d7dfb9315cfd38751fa9926475985a74"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.411614 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57"}
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.412467 4730 scope.go:117] "RemoveContainer" containerID="2a57047b25d7c894ae9847587e840769d7dfb9315cfd38751fa9926475985a74"
Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.412997 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zp8tp_openshift-multus(00b75ed7-302d-4f21-9c20-6ecab241b7b4)\"" pod="openshift-multus/multus-zp8tp" podUID="00b75ed7-302d-4f21-9c20-6ecab241b7b4"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.415636 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20832e84-d371-4d98-a0bc-828a69d2adaa-env-overrides\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.415929 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-var-lib-openvswitch\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.415984 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-run-ovn\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416019 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20832e84-d371-4d98-a0bc-828a69d2adaa-ovnkube-script-lib\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416094 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20832e84-d371-4d98-a0bc-828a69d2adaa-ovn-node-metrics-cert\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416135 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-run-ovn-kubernetes\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416205 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8khqm\" (UniqueName: \"kubernetes.io/projected/20832e84-d371-4d98-a0bc-828a69d2adaa-kube-api-access-8khqm\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416240 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-cni-bin\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416275 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416317 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-cni-netd\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416334 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-run-ovn\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416392 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-run-ovn-kubernetes\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416404 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-log-socket\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416345 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-log-socket\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416458 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-var-lib-openvswitch\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416681 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-cni-netd\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416883 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20832e84-d371-4d98-a0bc-828a69d2adaa-env-overrides\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416944 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.416989 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-cni-bin\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.417025 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-node-log\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.417135 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-node-log\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.417198 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-slash\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.417289 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-slash\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.417268 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-run-systemd\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.417428 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-run-systemd\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.417495 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-run-netns\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.417577 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-run-openvswitch\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.417651 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20832e84-d371-4d98-a0bc-828a69d2adaa-ovnkube-config\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.417742 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-systemd-units\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.417837 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-kubelet\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.417953 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-etc-openvswitch\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.418118 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-etc-openvswitch\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.418196 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-systemd-units\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.418242 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-run-netns\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.418244 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-host-kubelet\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.418273 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20832e84-d371-4d98-a0bc-828a69d2adaa-run-openvswitch\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.418959 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20832e84-d371-4d98-a0bc-828a69d2adaa-ovnkube-config\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.420914 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20832e84-d371-4d98-a0bc-828a69d2adaa-ovnkube-script-lib\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.426984 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20832e84-d371-4d98-a0bc-828a69d2adaa-ovn-node-metrics-cert\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.433229 4730 scope.go:117] "RemoveContainer" containerID="40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.451207 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8khqm\" (UniqueName: \"kubernetes.io/projected/20832e84-d371-4d98-a0bc-828a69d2adaa-kube-api-access-8khqm\") pod \"ovnkube-node-6kkbj\" (UID: \"20832e84-d371-4d98-a0bc-828a69d2adaa\") " pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.468680 4730 scope.go:117] "RemoveContainer" containerID="1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.473649 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-54z89"]
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.481907 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-54z89"]
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.484497 4730 scope.go:117] "RemoveContainer" containerID="146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.522395 4730 scope.go:117] "RemoveContainer" containerID="f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.539064 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.540521 4730 scope.go:117] "RemoveContainer" containerID="192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.565791 4730 scope.go:117] "RemoveContainer" containerID="a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f"
Feb 02 07:36:34 crc kubenswrapper[4730]: W0202 07:36:34.576145 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20832e84_d371_4d98_a0bc_828a69d2adaa.slice/crio-4386858cf9ff7adf63ab67b6c817f968f68de5a439cd13a37f69e8a7bf339a10 WatchSource:0}: Error finding container 4386858cf9ff7adf63ab67b6c817f968f68de5a439cd13a37f69e8a7bf339a10: Status 404 returned error can't find the container with id 4386858cf9ff7adf63ab67b6c817f968f68de5a439cd13a37f69e8a7bf339a10
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.583366 4730 scope.go:117] "RemoveContainer" containerID="13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.607507 4730 scope.go:117] "RemoveContainer" containerID="b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.631501 4730 scope.go:117] "RemoveContainer" containerID="3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.654150 4730 scope.go:117] "RemoveContainer" containerID="e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67"
Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.654850 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67\": container with ID starting with e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67 not found: ID does not exist" containerID="e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.654881 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67"} err="failed to get container status \"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67\": rpc error: code = NotFound desc = could not find container \"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67\": container with ID starting with e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67 not found: ID does not exist"
Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.654900 4730 scope.go:117] "RemoveContainer" containerID="40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d"
Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.655606 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\": container with ID starting with 40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d not found: ID does not exist" containerID="40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d"
Feb 02 07:36:34 crc
kubenswrapper[4730]: I0202 07:36:34.655635 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d"} err="failed to get container status \"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\": rpc error: code = NotFound desc = could not find container \"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\": container with ID starting with 40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.655656 4730 scope.go:117] "RemoveContainer" containerID="1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.656082 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\": container with ID starting with 1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee not found: ID does not exist" containerID="1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.656104 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee"} err="failed to get container status \"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\": rpc error: code = NotFound desc = could not find container \"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\": container with ID starting with 1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.656118 4730 scope.go:117] "RemoveContainer" containerID="146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1" Feb 02 
07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.657011 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\": container with ID starting with 146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1 not found: ID does not exist" containerID="146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.657038 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1"} err="failed to get container status \"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\": rpc error: code = NotFound desc = could not find container \"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\": container with ID starting with 146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.657057 4730 scope.go:117] "RemoveContainer" containerID="f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.658049 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\": container with ID starting with f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527 not found: ID does not exist" containerID="f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.658078 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527"} err="failed to get container status 
\"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\": rpc error: code = NotFound desc = could not find container \"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\": container with ID starting with f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.658096 4730 scope.go:117] "RemoveContainer" containerID="192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.659138 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\": container with ID starting with 192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d not found: ID does not exist" containerID="192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.659246 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d"} err="failed to get container status \"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\": rpc error: code = NotFound desc = could not find container \"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\": container with ID starting with 192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.659351 4730 scope.go:117] "RemoveContainer" containerID="a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.659748 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\": container with ID starting with a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f not found: ID does not exist" containerID="a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.659776 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f"} err="failed to get container status \"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\": rpc error: code = NotFound desc = could not find container \"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\": container with ID starting with a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.659795 4730 scope.go:117] "RemoveContainer" containerID="13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.660282 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\": container with ID starting with 13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7 not found: ID does not exist" containerID="13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.660322 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7"} err="failed to get container status \"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\": rpc error: code = NotFound desc = could not find container \"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\": container with ID 
starting with 13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.660350 4730 scope.go:117] "RemoveContainer" containerID="b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.660910 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\": container with ID starting with b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85 not found: ID does not exist" containerID="b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.660947 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85"} err="failed to get container status \"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\": rpc error: code = NotFound desc = could not find container \"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\": container with ID starting with b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.660968 4730 scope.go:117] "RemoveContainer" containerID="3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46" Feb 02 07:36:34 crc kubenswrapper[4730]: E0202 07:36:34.661460 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\": container with ID starting with 3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46 not found: ID does not exist" containerID="3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46" Feb 02 
07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.661513 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46"} err="failed to get container status \"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\": rpc error: code = NotFound desc = could not find container \"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\": container with ID starting with 3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.661540 4730 scope.go:117] "RemoveContainer" containerID="e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.661933 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67"} err="failed to get container status \"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67\": rpc error: code = NotFound desc = could not find container \"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67\": container with ID starting with e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.661963 4730 scope.go:117] "RemoveContainer" containerID="40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.662843 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d"} err="failed to get container status \"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\": rpc error: code = NotFound desc = could not find container 
\"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\": container with ID starting with 40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.662871 4730 scope.go:117] "RemoveContainer" containerID="1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.663271 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee"} err="failed to get container status \"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\": rpc error: code = NotFound desc = could not find container \"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\": container with ID starting with 1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.663297 4730 scope.go:117] "RemoveContainer" containerID="146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.663911 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1"} err="failed to get container status \"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\": rpc error: code = NotFound desc = could not find container \"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\": container with ID starting with 146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.663937 4730 scope.go:117] "RemoveContainer" containerID="f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.664287 4730 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527"} err="failed to get container status \"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\": rpc error: code = NotFound desc = could not find container \"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\": container with ID starting with f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.664329 4730 scope.go:117] "RemoveContainer" containerID="192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.664739 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d"} err="failed to get container status \"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\": rpc error: code = NotFound desc = could not find container \"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\": container with ID starting with 192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.664769 4730 scope.go:117] "RemoveContainer" containerID="a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.665000 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f"} err="failed to get container status \"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\": rpc error: code = NotFound desc = could not find container \"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\": container with ID starting with 
a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.665025 4730 scope.go:117] "RemoveContainer" containerID="13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.665265 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7"} err="failed to get container status \"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\": rpc error: code = NotFound desc = could not find container \"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\": container with ID starting with 13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.665290 4730 scope.go:117] "RemoveContainer" containerID="b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.665494 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85"} err="failed to get container status \"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\": rpc error: code = NotFound desc = could not find container \"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\": container with ID starting with b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.665519 4730 scope.go:117] "RemoveContainer" containerID="3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.665737 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46"} err="failed to get container status \"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\": rpc error: code = NotFound desc = could not find container \"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\": container with ID starting with 3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.665763 4730 scope.go:117] "RemoveContainer" containerID="e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.666194 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67"} err="failed to get container status \"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67\": rpc error: code = NotFound desc = could not find container \"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67\": container with ID starting with e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.666221 4730 scope.go:117] "RemoveContainer" containerID="40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.666514 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d"} err="failed to get container status \"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\": rpc error: code = NotFound desc = could not find container \"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\": container with ID starting with 40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d not found: ID does not 
exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.666536 4730 scope.go:117] "RemoveContainer" containerID="1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.666749 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee"} err="failed to get container status \"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\": rpc error: code = NotFound desc = could not find container \"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\": container with ID starting with 1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.666773 4730 scope.go:117] "RemoveContainer" containerID="146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.667212 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1"} err="failed to get container status \"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\": rpc error: code = NotFound desc = could not find container \"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\": container with ID starting with 146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.667236 4730 scope.go:117] "RemoveContainer" containerID="f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.667461 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527"} err="failed to get container status 
\"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\": rpc error: code = NotFound desc = could not find container \"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\": container with ID starting with f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.667486 4730 scope.go:117] "RemoveContainer" containerID="192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.667753 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d"} err="failed to get container status \"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\": rpc error: code = NotFound desc = could not find container \"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\": container with ID starting with 192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.667775 4730 scope.go:117] "RemoveContainer" containerID="a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.668000 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f"} err="failed to get container status \"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\": rpc error: code = NotFound desc = could not find container \"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\": container with ID starting with a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.668018 4730 scope.go:117] "RemoveContainer" 
containerID="13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.668250 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7"} err="failed to get container status \"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\": rpc error: code = NotFound desc = could not find container \"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\": container with ID starting with 13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.668277 4730 scope.go:117] "RemoveContainer" containerID="b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.668587 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85"} err="failed to get container status \"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\": rpc error: code = NotFound desc = could not find container \"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\": container with ID starting with b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.668602 4730 scope.go:117] "RemoveContainer" containerID="3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.668952 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46"} err="failed to get container status \"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\": rpc error: code = NotFound desc = could 
not find container \"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\": container with ID starting with 3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.668980 4730 scope.go:117] "RemoveContainer" containerID="e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.669341 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67"} err="failed to get container status \"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67\": rpc error: code = NotFound desc = could not find container \"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67\": container with ID starting with e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.669357 4730 scope.go:117] "RemoveContainer" containerID="40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.669733 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d"} err="failed to get container status \"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\": rpc error: code = NotFound desc = could not find container \"40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d\": container with ID starting with 40abdb2d43d9782780046f30446e5afc67281a5d6526ba1968b5bede49afcf1d not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.669759 4730 scope.go:117] "RemoveContainer" containerID="1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 
07:36:34.670104 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee"} err="failed to get container status \"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\": rpc error: code = NotFound desc = could not find container \"1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee\": container with ID starting with 1b36bfd96ce30890386b5fa0da688f92e3b208ba7acf8f46b2741558ccfcfbee not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.670120 4730 scope.go:117] "RemoveContainer" containerID="146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.670482 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1"} err="failed to get container status \"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\": rpc error: code = NotFound desc = could not find container \"146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1\": container with ID starting with 146998f4b55e48146428cf20f33f2856e7e80933c79258a91ca3e4dd2af1f2b1 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.670498 4730 scope.go:117] "RemoveContainer" containerID="f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.670767 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527"} err="failed to get container status \"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\": rpc error: code = NotFound desc = could not find container \"f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527\": container with ID starting with 
f60c47139aba5fcd0a760aea670baf16b7ceeb93af7d1bded7856c1ec2962527 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.670789 4730 scope.go:117] "RemoveContainer" containerID="192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.670993 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d"} err="failed to get container status \"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\": rpc error: code = NotFound desc = could not find container \"192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d\": container with ID starting with 192b3053fc6a14dbc6a7743fc663b57ef356cee732d0fc384bf53823cd41152d not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.671011 4730 scope.go:117] "RemoveContainer" containerID="a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.671240 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f"} err="failed to get container status \"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\": rpc error: code = NotFound desc = could not find container \"a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f\": container with ID starting with a28657c119609f9899f34bad176cc46face90a6717f6c6b1f6775d5a0ed6d70f not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.671270 4730 scope.go:117] "RemoveContainer" containerID="13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.671491 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7"} err="failed to get container status \"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\": rpc error: code = NotFound desc = could not find container \"13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7\": container with ID starting with 13a05023da79d7de8b9609992b13a34a427f6843b4ab2991619ddb7b785cf7b7 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.671518 4730 scope.go:117] "RemoveContainer" containerID="b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.671765 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85"} err="failed to get container status \"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\": rpc error: code = NotFound desc = could not find container \"b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85\": container with ID starting with b44603eb3c10492007685090df204a158a2a080f936fac559b5c096175ecfc85 not found: ID does not exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.671791 4730 scope.go:117] "RemoveContainer" containerID="3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.674457 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46"} err="failed to get container status \"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\": rpc error: code = NotFound desc = could not find container \"3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46\": container with ID starting with 3db9b9823a126c28c6ddf37011c7e50cf4acaa2077a8c451961c9b624d0b1d46 not found: ID does not 
exist" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.674483 4730 scope.go:117] "RemoveContainer" containerID="e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67" Feb 02 07:36:34 crc kubenswrapper[4730]: I0202 07:36:34.674699 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67"} err="failed to get container status \"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67\": rpc error: code = NotFound desc = could not find container \"e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67\": container with ID starting with e2d98c103b8191f0733404041261b4a248f3fd9568a412c914f23b1d138ced67 not found: ID does not exist" Feb 02 07:36:35 crc kubenswrapper[4730]: I0202 07:36:35.260151 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba7d1b84-4596-463a-bc77-c365c3c969b0" path="/var/lib/kubelet/pods/ba7d1b84-4596-463a-bc77-c365c3c969b0/volumes" Feb 02 07:36:35 crc kubenswrapper[4730]: I0202 07:36:35.419955 4730 generic.go:334] "Generic (PLEG): container finished" podID="20832e84-d371-4d98-a0bc-828a69d2adaa" containerID="3795bdbda1c646ae33b79d2462120b763c9533fb66ad409018c4c7dfa5d438cc" exitCode=0 Feb 02 07:36:35 crc kubenswrapper[4730]: I0202 07:36:35.419999 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" event={"ID":"20832e84-d371-4d98-a0bc-828a69d2adaa","Type":"ContainerDied","Data":"3795bdbda1c646ae33b79d2462120b763c9533fb66ad409018c4c7dfa5d438cc"} Feb 02 07:36:35 crc kubenswrapper[4730]: I0202 07:36:35.420084 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" event={"ID":"20832e84-d371-4d98-a0bc-828a69d2adaa","Type":"ContainerStarted","Data":"4386858cf9ff7adf63ab67b6c817f968f68de5a439cd13a37f69e8a7bf339a10"} Feb 02 07:36:36 crc kubenswrapper[4730]: I0202 07:36:36.430751 4730 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" event={"ID":"20832e84-d371-4d98-a0bc-828a69d2adaa","Type":"ContainerStarted","Data":"2fab28b748de59b9a71b61ac7fbd6870645aaa4e677e0a0faf34c3d3e090e492"} Feb 02 07:36:36 crc kubenswrapper[4730]: I0202 07:36:36.431604 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" event={"ID":"20832e84-d371-4d98-a0bc-828a69d2adaa","Type":"ContainerStarted","Data":"9af505c091b73ae89e527f61355b65089ba434c89d4dfa0bbff5545e935be616"} Feb 02 07:36:36 crc kubenswrapper[4730]: I0202 07:36:36.431638 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" event={"ID":"20832e84-d371-4d98-a0bc-828a69d2adaa","Type":"ContainerStarted","Data":"4ecd05ab59010e7a9986f84d43d173eeb719f89949549cc9d42cd9326adc87a0"} Feb 02 07:36:36 crc kubenswrapper[4730]: I0202 07:36:36.431660 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" event={"ID":"20832e84-d371-4d98-a0bc-828a69d2adaa","Type":"ContainerStarted","Data":"40a2e75a3b84af6300d01f3ef06c67231049e092cfa4687179e5d00ad6186d51"} Feb 02 07:36:36 crc kubenswrapper[4730]: I0202 07:36:36.431682 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" event={"ID":"20832e84-d371-4d98-a0bc-828a69d2adaa","Type":"ContainerStarted","Data":"7b667d48e6f2a8f0d54fc752b02cca9d55d8ebbb01f00631cf7f0c86405fa347"} Feb 02 07:36:36 crc kubenswrapper[4730]: I0202 07:36:36.431706 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" event={"ID":"20832e84-d371-4d98-a0bc-828a69d2adaa","Type":"ContainerStarted","Data":"80b06fa9c876f28dfb205cdfed2b5a299b9b4a9b596099b288213ea3946abc54"} Feb 02 07:36:39 crc kubenswrapper[4730]: I0202 07:36:39.455334 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" event={"ID":"20832e84-d371-4d98-a0bc-828a69d2adaa","Type":"ContainerStarted","Data":"a8d448841c9cc92d930ccf28a9adfb70da49e70ed733e09ec8f647ef18e1ef22"} Feb 02 07:36:41 crc kubenswrapper[4730]: I0202 07:36:41.478052 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" event={"ID":"20832e84-d371-4d98-a0bc-828a69d2adaa","Type":"ContainerStarted","Data":"3fdf22eba6b4328b2582212755ba91a7b9503dccd83bd63774f7b49a891afa5e"} Feb 02 07:36:41 crc kubenswrapper[4730]: I0202 07:36:41.479324 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:41 crc kubenswrapper[4730]: I0202 07:36:41.479393 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:41 crc kubenswrapper[4730]: I0202 07:36:41.479421 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:41 crc kubenswrapper[4730]: I0202 07:36:41.526476 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:41 crc kubenswrapper[4730]: I0202 07:36:41.527317 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" podStartSLOduration=7.527274663 podStartE2EDuration="7.527274663s" podCreationTimestamp="2026-02-02 07:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:36:41.525630999 +0000 UTC m=+574.946834387" watchObservedRunningTime="2026-02-02 07:36:41.527274663 +0000 UTC m=+574.948478051" Feb 02 07:36:41 crc kubenswrapper[4730]: I0202 07:36:41.534382 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:36:47 crc kubenswrapper[4730]: I0202 07:36:47.257878 4730 scope.go:117] "RemoveContainer" containerID="2a57047b25d7c894ae9847587e840769d7dfb9315cfd38751fa9926475985a74" Feb 02 07:36:47 crc kubenswrapper[4730]: E0202 07:36:47.258950 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zp8tp_openshift-multus(00b75ed7-302d-4f21-9c20-6ecab241b7b4)\"" pod="openshift-multus/multus-zp8tp" podUID="00b75ed7-302d-4f21-9c20-6ecab241b7b4" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.512008 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph"] Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.513978 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.517475 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.517592 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-p6hk4" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.517949 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.629794 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5kwc\" (UniqueName: \"kubernetes.io/projected/0797f61e-5af0-4070-ab85-3026f244c2f3-kube-api-access-j5kwc\") pod \"ceph\" (UID: \"0797f61e-5af0-4070-ab85-3026f244c2f3\") " pod="openstack/ceph" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.630133 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: 
\"kubernetes.io/empty-dir/0797f61e-5af0-4070-ab85-3026f244c2f3-data\") pod \"ceph\" (UID: \"0797f61e-5af0-4070-ab85-3026f244c2f3\") " pod="openstack/ceph" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.630337 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/0797f61e-5af0-4070-ab85-3026f244c2f3-log\") pod \"ceph\" (UID: \"0797f61e-5af0-4070-ab85-3026f244c2f3\") " pod="openstack/ceph" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.630569 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/0797f61e-5af0-4070-ab85-3026f244c2f3-run\") pod \"ceph\" (UID: \"0797f61e-5af0-4070-ab85-3026f244c2f3\") " pod="openstack/ceph" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.732073 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0797f61e-5af0-4070-ab85-3026f244c2f3-data\") pod \"ceph\" (UID: \"0797f61e-5af0-4070-ab85-3026f244c2f3\") " pod="openstack/ceph" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.732667 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/0797f61e-5af0-4070-ab85-3026f244c2f3-log\") pod \"ceph\" (UID: \"0797f61e-5af0-4070-ab85-3026f244c2f3\") " pod="openstack/ceph" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.732929 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/0797f61e-5af0-4070-ab85-3026f244c2f3-run\") pod \"ceph\" (UID: \"0797f61e-5af0-4070-ab85-3026f244c2f3\") " pod="openstack/ceph" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.733287 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5kwc\" (UniqueName: 
\"kubernetes.io/projected/0797f61e-5af0-4070-ab85-3026f244c2f3-kube-api-access-j5kwc\") pod \"ceph\" (UID: \"0797f61e-5af0-4070-ab85-3026f244c2f3\") " pod="openstack/ceph" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.733075 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/0797f61e-5af0-4070-ab85-3026f244c2f3-log\") pod \"ceph\" (UID: \"0797f61e-5af0-4070-ab85-3026f244c2f3\") " pod="openstack/ceph" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.733688 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/0797f61e-5af0-4070-ab85-3026f244c2f3-run\") pod \"ceph\" (UID: \"0797f61e-5af0-4070-ab85-3026f244c2f3\") " pod="openstack/ceph" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.732732 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0797f61e-5af0-4070-ab85-3026f244c2f3-data\") pod \"ceph\" (UID: \"0797f61e-5af0-4070-ab85-3026f244c2f3\") " pod="openstack/ceph" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.775992 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5kwc\" (UniqueName: \"kubernetes.io/projected/0797f61e-5af0-4070-ab85-3026f244c2f3-kube-api-access-j5kwc\") pod \"ceph\" (UID: \"0797f61e-5af0-4070-ab85-3026f244c2f3\") " pod="openstack/ceph" Feb 02 07:36:54 crc kubenswrapper[4730]: I0202 07:36:54.836219 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph" Feb 02 07:36:54 crc kubenswrapper[4730]: W0202 07:36:54.878726 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0797f61e_5af0_4070_ab85_3026f244c2f3.slice/crio-f44efa6418a93c88d1a5e844b2fc975884ad0f031ad145cb6c453f1d946545f9 WatchSource:0}: Error finding container f44efa6418a93c88d1a5e844b2fc975884ad0f031ad145cb6c453f1d946545f9: Status 404 returned error can't find the container with id f44efa6418a93c88d1a5e844b2fc975884ad0f031ad145cb6c453f1d946545f9 Feb 02 07:36:54 crc kubenswrapper[4730]: E0202 07:36:54.884794 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:36:54 crc kubenswrapper[4730]: E0202 07:36:54.909086 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:36:55 crc kubenswrapper[4730]: I0202 07:36:55.593623 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph" event={"ID":"0797f61e-5af0-4070-ab85-3026f244c2f3","Type":"ContainerStarted","Data":"f44efa6418a93c88d1a5e844b2fc975884ad0f031ad145cb6c453f1d946545f9"} Feb 02 07:36:56 crc kubenswrapper[4730]: E0202 07:36:56.100927 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:36:56 crc kubenswrapper[4730]: E0202 07:36:56.117656 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate 
SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:36:57 crc kubenswrapper[4730]: E0202 07:36:57.274547 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:36:57 crc kubenswrapper[4730]: E0202 07:36:57.290228 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:36:57 crc kubenswrapper[4730]: I0202 07:36:57.660396 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:36:57 crc kubenswrapper[4730]: I0202 07:36:57.660452 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:36:57 crc kubenswrapper[4730]: I0202 07:36:57.660493 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:36:57 crc kubenswrapper[4730]: I0202 07:36:57.661061 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"8849749fe3e9f64250963ade15077dd456c8db563b57325c073662609fdb45bd"} pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:36:57 crc kubenswrapper[4730]: I0202 07:36:57.661115 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" containerID="cri-o://8849749fe3e9f64250963ade15077dd456c8db563b57325c073662609fdb45bd" gracePeriod=600 Feb 02 07:36:58 crc kubenswrapper[4730]: E0202 07:36:58.510339 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:36:58 crc kubenswrapper[4730]: E0202 07:36:58.529669 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:36:58 crc kubenswrapper[4730]: I0202 07:36:58.618534 4730 generic.go:334] "Generic (PLEG): container finished" podID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerID="8849749fe3e9f64250963ade15077dd456c8db563b57325c073662609fdb45bd" exitCode=0 Feb 02 07:36:58 crc kubenswrapper[4730]: I0202 07:36:58.618609 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerDied","Data":"8849749fe3e9f64250963ade15077dd456c8db563b57325c073662609fdb45bd"} Feb 02 07:36:58 crc kubenswrapper[4730]: I0202 07:36:58.618840 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerStarted","Data":"55eac5b141933721b9ad3aae038fd186f17ff4ec4c4e26d6f58c125af5debbac"} Feb 02 07:36:58 crc kubenswrapper[4730]: I0202 07:36:58.618860 4730 scope.go:117] "RemoveContainer" containerID="c8f3d89438b2c90a3df4d2c24ead952c1532c846097a23f6bde4650baadb23c4" Feb 02 07:36:59 crc kubenswrapper[4730]: E0202 07:36:59.747974 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:36:59 crc kubenswrapper[4730]: E0202 07:36:59.761828 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:00 crc kubenswrapper[4730]: I0202 07:37:00.254593 4730 scope.go:117] "RemoveContainer" containerID="2a57047b25d7c894ae9847587e840769d7dfb9315cfd38751fa9926475985a74" Feb 02 07:37:00 crc kubenswrapper[4730]: I0202 07:37:00.641343 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zp8tp_00b75ed7-302d-4f21-9c20-6ecab241b7b4/kube-multus/2.log" Feb 02 07:37:00 crc kubenswrapper[4730]: I0202 07:37:00.641921 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zp8tp_00b75ed7-302d-4f21-9c20-6ecab241b7b4/kube-multus/1.log" Feb 02 07:37:00 crc kubenswrapper[4730]: I0202 07:37:00.641971 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zp8tp" event={"ID":"00b75ed7-302d-4f21-9c20-6ecab241b7b4","Type":"ContainerStarted","Data":"8228b836f6d57d40f6ffa57b4b252676009a17f34165773c2544d82bbede2490"} Feb 02 07:37:00 crc kubenswrapper[4730]: E0202 
07:37:00.963056 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:00 crc kubenswrapper[4730]: E0202 07:37:00.976785 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:02 crc kubenswrapper[4730]: E0202 07:37:02.217177 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:02 crc kubenswrapper[4730]: E0202 07:37:02.230241 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:03 crc kubenswrapper[4730]: E0202 07:37:03.436534 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:03 crc kubenswrapper[4730]: E0202 07:37:03.457643 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:04 crc kubenswrapper[4730]: I0202 07:37:04.561814 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-6kkbj" Feb 02 07:37:04 crc kubenswrapper[4730]: E0202 07:37:04.690436 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:04 crc kubenswrapper[4730]: E0202 07:37:04.706090 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:05 crc kubenswrapper[4730]: E0202 07:37:05.913375 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:05 crc kubenswrapper[4730]: E0202 07:37:05.928290 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:07 crc kubenswrapper[4730]: E0202 07:37:07.130233 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:07 crc kubenswrapper[4730]: E0202 07:37:07.157295 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:07 crc kubenswrapper[4730]: I0202 07:37:07.561733 
4730 scope.go:117] "RemoveContainer" containerID="b3548735f4d96f7c64d4bfb7968799a771f825aab47231f2264fe8775eafe965" Feb 02 07:37:08 crc kubenswrapper[4730]: E0202 07:37:08.329769 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:08 crc kubenswrapper[4730]: E0202 07:37:08.344590 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:09 crc kubenswrapper[4730]: E0202 07:37:09.526397 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:09 crc kubenswrapper[4730]: E0202 07:37:09.542329 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:10 crc kubenswrapper[4730]: E0202 07:37:10.737941 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:37:10 crc kubenswrapper[4730]: E0202 07:37:10.751499 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" 
Feb 02 07:37:11 crc kubenswrapper[4730]: E0202 07:37:11.978398 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:12 crc kubenswrapper[4730]: E0202 07:37:12.000351 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:13 crc kubenswrapper[4730]: E0202 07:37:13.143745 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:13 crc kubenswrapper[4730]: E0202 07:37:13.163683 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:14 crc kubenswrapper[4730]: E0202 07:37:14.322772 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:14 crc kubenswrapper[4730]: E0202 07:37:14.335440 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:15 crc kubenswrapper[4730]: I0202 07:37:15.045021 4730 scope.go:117] "RemoveContainer" containerID="f17e38821be73a57584c9414bcf297c3bb3618499e8076e7bba7aeef734c4e6d"
Feb 02 07:37:15 crc kubenswrapper[4730]: I0202 07:37:15.436718 4730 scope.go:117] "RemoveContainer" containerID="2aa1f90a569e227d6a731163360597e376e15f2f323c867a0569d9404dd25a57"
Feb 02 07:37:15 crc kubenswrapper[4730]: E0202 07:37:15.543320 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:15 crc kubenswrapper[4730]: E0202 07:37:15.558182 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:15 crc kubenswrapper[4730]: E0202 07:37:15.579802 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/ceph/demo:latest-squid"
Feb 02 07:37:15 crc kubenswrapper[4730]: E0202 07:37:15.579984 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceph,Image:quay.io/ceph/demo:latest-squid,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:MON_IP,Value:192.168.126.11,ValueFrom:nil,},EnvVar{Name:CEPH_DAEMON,Value:demo,ValueFrom:nil,},EnvVar{Name:CEPH_PUBLIC_NETWORK,Value:0.0.0.0/0,ValueFrom:nil,},EnvVar{Name:DEMO_DAEMONS,Value:osd,mds,rgw,ValueFrom:nil,},EnvVar{Name:CEPH_DEMO_UID,Value:0,ValueFrom:nil,},EnvVar{Name:RGW_NAME,Value:ceph,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/var/lib/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log,ReadOnly:false,MountPath:/var/log/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5kwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceph_openstack(0797f61e-5af0-4070-ab85-3026f244c2f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 07:37:15 crc kubenswrapper[4730]: E0202 07:37:15.581248 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceph\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceph" podUID="0797f61e-5af0-4070-ab85-3026f244c2f3"
Feb 02 07:37:15 crc kubenswrapper[4730]: I0202 07:37:15.739954 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zp8tp_00b75ed7-302d-4f21-9c20-6ecab241b7b4/kube-multus/2.log"
Feb 02 07:37:15 crc kubenswrapper[4730]: E0202 07:37:15.742774 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceph\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/ceph/demo:latest-squid\\\"\"" pod="openstack/ceph" podUID="0797f61e-5af0-4070-ab85-3026f244c2f3"
Feb 02 07:37:16 crc kubenswrapper[4730]: E0202 07:37:16.785016 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:16 crc kubenswrapper[4730]: E0202 07:37:16.803505 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:17 crc kubenswrapper[4730]: E0202 07:37:17.981475 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:17 crc kubenswrapper[4730]: E0202 07:37:17.997453 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:19 crc kubenswrapper[4730]: E0202 07:37:19.195753 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:19 crc kubenswrapper[4730]: E0202 07:37:19.218760 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:20 crc kubenswrapper[4730]: E0202 07:37:20.439322 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:20 crc kubenswrapper[4730]: E0202 07:37:20.461029 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:21 crc kubenswrapper[4730]: E0202 07:37:21.673729 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:21 crc kubenswrapper[4730]: E0202 07:37:21.694829 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:22 crc kubenswrapper[4730]: E0202 07:37:22.899030 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:22 crc kubenswrapper[4730]: E0202 07:37:22.921126 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:24 crc kubenswrapper[4730]: E0202 07:37:24.126147 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:24 crc kubenswrapper[4730]: E0202 07:37:24.149337 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:25 crc kubenswrapper[4730]: E0202 07:37:25.342973 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:25 crc kubenswrapper[4730]: E0202 07:37:25.363111 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:26 crc kubenswrapper[4730]: E0202 07:37:26.557443 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:26 crc kubenswrapper[4730]: E0202 07:37:26.579478 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:27 crc kubenswrapper[4730]: E0202 07:37:27.779772 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:27 crc kubenswrapper[4730]: E0202 07:37:27.802401 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:28 crc kubenswrapper[4730]: E0202 07:37:28.986322 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:29 crc kubenswrapper[4730]: E0202 07:37:29.009070 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:30 crc kubenswrapper[4730]: E0202 07:37:30.216910 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:30 crc kubenswrapper[4730]: E0202 07:37:30.240534 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:30 crc kubenswrapper[4730]: I0202 07:37:30.868417 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph" event={"ID":"0797f61e-5af0-4070-ab85-3026f244c2f3","Type":"ContainerStarted","Data":"0ec72cee8f1f2988a44d4cf2d1b4c7e4a15a388cf5b39dde32ee24f8b94b8fc1"}
Feb 02 07:37:30 crc kubenswrapper[4730]: I0202 07:37:30.887815 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph" podStartSLOduration=2.075441129 podStartE2EDuration="36.887796613s" podCreationTimestamp="2026-02-02 07:36:54 +0000 UTC" firstStartedPulling="2026-02-02 07:36:54.882760935 +0000 UTC m=+588.303964323" lastFinishedPulling="2026-02-02 07:37:29.695116419 +0000 UTC m=+623.116319807" observedRunningTime="2026-02-02 07:37:30.8846868 +0000 UTC m=+624.305890148" watchObservedRunningTime="2026-02-02 07:37:30.887796613 +0000 UTC m=+624.308999961"
Feb 02 07:37:31 crc kubenswrapper[4730]: E0202 07:37:31.435297 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:31 crc kubenswrapper[4730]: E0202 07:37:31.457337 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:32 crc kubenswrapper[4730]: E0202 07:37:32.691857 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:32 crc kubenswrapper[4730]: E0202 07:37:32.716212 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:33 crc kubenswrapper[4730]: E0202 07:37:33.932424 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:33 crc kubenswrapper[4730]: E0202 07:37:33.954840 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:35 crc kubenswrapper[4730]: E0202 07:37:35.148886 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:35 crc kubenswrapper[4730]: E0202 07:37:35.160771 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:36 crc kubenswrapper[4730]: E0202 07:37:36.365092 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:36 crc kubenswrapper[4730]: E0202 07:37:36.385986 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:37 crc kubenswrapper[4730]: E0202 07:37:37.612877 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:37 crc kubenswrapper[4730]: E0202 07:37:37.640705 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:38 crc kubenswrapper[4730]: E0202 07:37:38.878915 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:38 crc kubenswrapper[4730]: E0202 07:37:38.901823 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:40 crc kubenswrapper[4730]: E0202 07:37:40.127928 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:40 crc kubenswrapper[4730]: E0202 07:37:40.148240 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:41 crc kubenswrapper[4730]: E0202 07:37:41.380421 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:41 crc kubenswrapper[4730]: E0202 07:37:41.401474 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:42 crc kubenswrapper[4730]: E0202 07:37:42.602379 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:42 crc kubenswrapper[4730]: E0202 07:37:42.625106 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:43 crc kubenswrapper[4730]: E0202 07:37:43.803895 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:43 crc kubenswrapper[4730]: E0202 07:37:43.818107 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:45 crc kubenswrapper[4730]: E0202 07:37:45.026815 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:45 crc kubenswrapper[4730]: E0202 07:37:45.045835 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:46 crc kubenswrapper[4730]: E0202 07:37:46.253988 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:46 crc kubenswrapper[4730]: E0202 07:37:46.270279 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:47 crc kubenswrapper[4730]: E0202 07:37:47.497638 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:47 crc kubenswrapper[4730]: E0202 07:37:47.522913 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:48 crc kubenswrapper[4730]: E0202 07:37:48.739524 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:48 crc kubenswrapper[4730]: E0202 07:37:48.761144 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:49 crc kubenswrapper[4730]: E0202 07:37:49.906598 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:49 crc kubenswrapper[4730]: E0202 07:37:49.926552 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:51 crc kubenswrapper[4730]: E0202 07:37:51.146218 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:51 crc kubenswrapper[4730]: E0202 07:37:51.168406 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:52 crc kubenswrapper[4730]: E0202 07:37:52.397197 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:52 crc kubenswrapper[4730]: E0202 07:37:52.410811 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:53 crc kubenswrapper[4730]: E0202 07:37:53.614233 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:53 crc kubenswrapper[4730]: E0202 07:37:53.633455 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:54 crc kubenswrapper[4730]: E0202 07:37:54.808925 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:54 crc kubenswrapper[4730]: E0202 07:37:54.831824 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:56 crc kubenswrapper[4730]: E0202 07:37:56.054235 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:56 crc kubenswrapper[4730]: E0202 07:37:56.075413 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:57 crc kubenswrapper[4730]: E0202 07:37:57.290999 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:57 crc kubenswrapper[4730]: E0202 07:37:57.311363 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:58 crc kubenswrapper[4730]: E0202 07:37:58.511002 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:58 crc kubenswrapper[4730]: E0202 07:37:58.526097 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:59 crc kubenswrapper[4730]: E0202 07:37:59.745719 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:37:59 crc kubenswrapper[4730]: E0202 07:37:59.768681 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:00 crc kubenswrapper[4730]: E0202 07:38:00.976805 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:00 crc kubenswrapper[4730]: E0202 07:38:00.992594 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:02 crc kubenswrapper[4730]: E0202 07:38:02.212269 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:02 crc kubenswrapper[4730]: E0202 07:38:02.230011 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:03 crc kubenswrapper[4730]: E0202 07:38:03.404273 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:03 crc kubenswrapper[4730]: E0202 07:38:03.419352 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:04 crc kubenswrapper[4730]: E0202 07:38:04.632486 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:04 crc kubenswrapper[4730]: E0202 07:38:04.655469 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:05 crc kubenswrapper[4730]: E0202 07:38:05.900574 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:05 crc kubenswrapper[4730]: E0202 07:38:05.923787 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:07 crc kubenswrapper[4730]: E0202 07:38:07.118417 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:07 crc kubenswrapper[4730]: E0202 07:38:07.140853 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:08 crc kubenswrapper[4730]: E0202 07:38:08.338526 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:08 crc kubenswrapper[4730]: E0202 07:38:08.361036 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:09 crc kubenswrapper[4730]: E0202 07:38:09.533468 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:09 crc kubenswrapper[4730]: E0202 07:38:09.553598 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:10 crc kubenswrapper[4730]: E0202 07:38:10.790706 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:10 crc kubenswrapper[4730]: E0202 07:38:10.811821 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:11 crc kubenswrapper[4730]: E0202 07:38:11.984124 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:12 crc kubenswrapper[4730]: E0202 07:38:12.002779 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:13 crc kubenswrapper[4730]: E0202 07:38:13.220015 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:13 crc kubenswrapper[4730]: E0202 07:38:13.243053 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:14 crc kubenswrapper[4730]: E0202 07:38:14.443839 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:14 crc kubenswrapper[4730]: E0202 07:38:14.464716 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:15 crc kubenswrapper[4730]: E0202 07:38:15.670512 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:15 crc kubenswrapper[4730]: E0202 07:38:15.684253 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:16 crc kubenswrapper[4730]: E0202 07:38:16.880440 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:16 crc kubenswrapper[4730]: E0202 07:38:16.902196 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:18 crc kubenswrapper[4730]: E0202 07:38:18.106817 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:18 crc kubenswrapper[4730]: E0202 07:38:18.128872 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:19 crc kubenswrapper[4730]: E0202 07:38:19.345788 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:19 crc kubenswrapper[4730]: E0202 07:38:19.369925 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:20 crc kubenswrapper[4730]: E0202 07:38:20.586287 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:20 crc kubenswrapper[4730]: E0202 07:38:20.602303 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:21 crc kubenswrapper[4730]: E0202 07:38:21.819918 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:21 crc kubenswrapper[4730]: E0202 07:38:21.842845 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:23 crc kubenswrapper[4730]: E0202 07:38:23.057644 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:38:23 crc kubenswrapper[4730]: E0202 07:38:23.077121 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate 
SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:24 crc kubenswrapper[4730]: E0202 07:38:24.261889 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:24 crc kubenswrapper[4730]: E0202 07:38:24.282730 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:25 crc kubenswrapper[4730]: E0202 07:38:25.516610 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:25 crc kubenswrapper[4730]: E0202 07:38:25.536031 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:26 crc kubenswrapper[4730]: E0202 07:38:26.751465 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:26 crc kubenswrapper[4730]: E0202 07:38:26.770903 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: 
certificate signed by unknown authority" Feb 02 07:38:27 crc kubenswrapper[4730]: E0202 07:38:27.973124 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:27 crc kubenswrapper[4730]: E0202 07:38:27.994473 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:29 crc kubenswrapper[4730]: E0202 07:38:29.180279 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:29 crc kubenswrapper[4730]: E0202 07:38:29.202582 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:30 crc kubenswrapper[4730]: E0202 07:38:30.391381 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:30 crc kubenswrapper[4730]: E0202 07:38:30.412385 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:31 crc kubenswrapper[4730]: E0202 07:38:31.603213 4730 
server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:31 crc kubenswrapper[4730]: E0202 07:38:31.619829 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:32 crc kubenswrapper[4730]: E0202 07:38:32.828917 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:32 crc kubenswrapper[4730]: E0202 07:38:32.847260 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:34 crc kubenswrapper[4730]: E0202 07:38:34.056091 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:34 crc kubenswrapper[4730]: E0202 07:38:34.081942 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:35 crc kubenswrapper[4730]: E0202 07:38:35.303745 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate 
SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:35 crc kubenswrapper[4730]: E0202 07:38:35.328697 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:36 crc kubenswrapper[4730]: E0202 07:38:36.550133 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:36 crc kubenswrapper[4730]: E0202 07:38:36.571409 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:37 crc kubenswrapper[4730]: E0202 07:38:37.738857 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:37 crc kubenswrapper[4730]: E0202 07:38:37.755753 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:38 crc kubenswrapper[4730]: E0202 07:38:38.959422 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: 
certificate signed by unknown authority" Feb 02 07:38:38 crc kubenswrapper[4730]: E0202 07:38:38.979547 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:40 crc kubenswrapper[4730]: E0202 07:38:40.183272 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:40 crc kubenswrapper[4730]: E0202 07:38:40.204487 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:41 crc kubenswrapper[4730]: E0202 07:38:41.411668 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:41 crc kubenswrapper[4730]: E0202 07:38:41.426893 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:42 crc kubenswrapper[4730]: E0202 07:38:42.628716 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:42 crc kubenswrapper[4730]: E0202 07:38:42.647278 4730 
server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:43 crc kubenswrapper[4730]: E0202 07:38:43.857950 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:43 crc kubenswrapper[4730]: E0202 07:38:43.879604 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:45 crc kubenswrapper[4730]: E0202 07:38:45.105423 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:45 crc kubenswrapper[4730]: E0202 07:38:45.127739 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:46 crc kubenswrapper[4730]: E0202 07:38:46.314454 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:46 crc kubenswrapper[4730]: E0202 07:38:46.334038 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate 
SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:47 crc kubenswrapper[4730]: E0202 07:38:47.545819 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:47 crc kubenswrapper[4730]: E0202 07:38:47.561372 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:48 crc kubenswrapper[4730]: E0202 07:38:48.799482 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:48 crc kubenswrapper[4730]: E0202 07:38:48.821255 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:49 crc kubenswrapper[4730]: E0202 07:38:49.988258 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:50 crc kubenswrapper[4730]: E0202 07:38:50.008926 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: 
certificate signed by unknown authority" Feb 02 07:38:51 crc kubenswrapper[4730]: E0202 07:38:51.213516 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:51 crc kubenswrapper[4730]: E0202 07:38:51.232777 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:52 crc kubenswrapper[4730]: E0202 07:38:52.409208 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:52 crc kubenswrapper[4730]: E0202 07:38:52.428417 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:53 crc kubenswrapper[4730]: E0202 07:38:53.656892 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:53 crc kubenswrapper[4730]: E0202 07:38:53.676294 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:54 crc kubenswrapper[4730]: E0202 07:38:54.818738 4730 
server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:54 crc kubenswrapper[4730]: E0202 07:38:54.832689 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:56 crc kubenswrapper[4730]: E0202 07:38:56.026742 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:56 crc kubenswrapper[4730]: E0202 07:38:56.052945 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:57 crc kubenswrapper[4730]: E0202 07:38:57.287995 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:57 crc kubenswrapper[4730]: E0202 07:38:57.308996 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:58 crc kubenswrapper[4730]: E0202 07:38:58.517083 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate 
SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:58 crc kubenswrapper[4730]: E0202 07:38:58.537422 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:59 crc kubenswrapper[4730]: E0202 07:38:59.756341 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:38:59 crc kubenswrapper[4730]: E0202 07:38:59.774631 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:01 crc kubenswrapper[4730]: E0202 07:39:01.002256 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:01 crc kubenswrapper[4730]: E0202 07:39:01.022984 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:02 crc kubenswrapper[4730]: E0202 07:39:02.230071 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: 
certificate signed by unknown authority" Feb 02 07:39:02 crc kubenswrapper[4730]: E0202 07:39:02.251487 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:03 crc kubenswrapper[4730]: E0202 07:39:03.450631 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:03 crc kubenswrapper[4730]: E0202 07:39:03.470611 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:04 crc kubenswrapper[4730]: E0202 07:39:04.684002 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:04 crc kubenswrapper[4730]: E0202 07:39:04.705480 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:05 crc kubenswrapper[4730]: E0202 07:39:05.917254 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:05 crc kubenswrapper[4730]: E0202 07:39:05.938422 4730 
server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:07 crc kubenswrapper[4730]: E0202 07:39:07.166259 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:07 crc kubenswrapper[4730]: E0202 07:39:07.188017 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:08 crc kubenswrapper[4730]: E0202 07:39:08.406259 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:08 crc kubenswrapper[4730]: E0202 07:39:08.429722 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:09 crc kubenswrapper[4730]: E0202 07:39:09.584013 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:09 crc kubenswrapper[4730]: E0202 07:39:09.604917 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate 
SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:10 crc kubenswrapper[4730]: E0202 07:39:10.815062 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:10 crc kubenswrapper[4730]: E0202 07:39:10.837017 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:12 crc kubenswrapper[4730]: E0202 07:39:12.045987 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:12 crc kubenswrapper[4730]: E0202 07:39:12.066141 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:13 crc kubenswrapper[4730]: E0202 07:39:13.267093 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority" Feb 02 07:39:13 crc kubenswrapper[4730]: E0202 07:39:13.294628 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: 
certificate signed by unknown authority"
Feb 02 07:39:14 crc kubenswrapper[4730]: E0202 07:39:14.456925 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:39:14 crc kubenswrapper[4730]: E0202 07:39:14.473616 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:39:15 crc kubenswrapper[4730]: E0202 07:39:15.651819 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:39:15 crc kubenswrapper[4730]: E0202 07:39:15.674900 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:39:16 crc kubenswrapper[4730]: E0202 07:39:16.848413 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:39:16 crc kubenswrapper[4730]: E0202 07:39:16.872936 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:39:18 crc kubenswrapper[4730]: E0202 07:39:18.097623 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:39:18 crc kubenswrapper[4730]: E0202 07:39:18.117049 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:39:19 crc kubenswrapper[4730]: E0202 07:39:19.283942 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:39:19 crc kubenswrapper[4730]: E0202 07:39:19.305994 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:39:20 crc kubenswrapper[4730]: E0202 07:39:20.520585 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:39:20 crc kubenswrapper[4730]: E0202 07:39:20.542950 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=8359549100515067087, SKID=, AKID=28:AF:AC:AE:22:B7:69:E6:BA:D4:BD:D2:C6:6B:C3:DF:3E:7B:D9:B3 failed: x509: certificate signed by unknown authority"
Feb 02 07:39:27 crc kubenswrapper[4730]: I0202 07:39:27.660687 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:39:27 crc kubenswrapper[4730]: I0202 07:39:27.661100 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:39:41 crc kubenswrapper[4730]: I0202 07:39:41.430409 4730 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 07:39:57 crc kubenswrapper[4730]: I0202 07:39:57.661046 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:39:57 crc kubenswrapper[4730]: I0202 07:39:57.661897 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:40:02 crc kubenswrapper[4730]: I0202 07:40:02.675127 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v7bb6/must-gather-qw75m"]
Feb 02 07:40:02 crc kubenswrapper[4730]: I0202 07:40:02.676511 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v7bb6/must-gather-qw75m"
Feb 02 07:40:02 crc kubenswrapper[4730]: I0202 07:40:02.679810 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v7bb6"/"default-dockercfg-zdxwf"
Feb 02 07:40:02 crc kubenswrapper[4730]: I0202 07:40:02.679807 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v7bb6"/"openshift-service-ca.crt"
Feb 02 07:40:02 crc kubenswrapper[4730]: I0202 07:40:02.679992 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v7bb6"/"kube-root-ca.crt"
Feb 02 07:40:02 crc kubenswrapper[4730]: I0202 07:40:02.684180 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v7bb6/must-gather-qw75m"]
Feb 02 07:40:02 crc kubenswrapper[4730]: I0202 07:40:02.723617 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n77cw\" (UniqueName: \"kubernetes.io/projected/814e361d-adf2-4eab-8544-3cb2d2b3e885-kube-api-access-n77cw\") pod \"must-gather-qw75m\" (UID: \"814e361d-adf2-4eab-8544-3cb2d2b3e885\") " pod="openshift-must-gather-v7bb6/must-gather-qw75m"
Feb 02 07:40:02 crc kubenswrapper[4730]: I0202 07:40:02.723727 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/814e361d-adf2-4eab-8544-3cb2d2b3e885-must-gather-output\") pod \"must-gather-qw75m\" (UID: \"814e361d-adf2-4eab-8544-3cb2d2b3e885\") " pod="openshift-must-gather-v7bb6/must-gather-qw75m"
Feb 02 07:40:02 crc kubenswrapper[4730]: I0202 07:40:02.824767 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n77cw\" (UniqueName: \"kubernetes.io/projected/814e361d-adf2-4eab-8544-3cb2d2b3e885-kube-api-access-n77cw\") pod \"must-gather-qw75m\" (UID: \"814e361d-adf2-4eab-8544-3cb2d2b3e885\") " pod="openshift-must-gather-v7bb6/must-gather-qw75m"
Feb 02 07:40:02 crc kubenswrapper[4730]: I0202 07:40:02.824895 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/814e361d-adf2-4eab-8544-3cb2d2b3e885-must-gather-output\") pod \"must-gather-qw75m\" (UID: \"814e361d-adf2-4eab-8544-3cb2d2b3e885\") " pod="openshift-must-gather-v7bb6/must-gather-qw75m"
Feb 02 07:40:02 crc kubenswrapper[4730]: I0202 07:40:02.825475 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/814e361d-adf2-4eab-8544-3cb2d2b3e885-must-gather-output\") pod \"must-gather-qw75m\" (UID: \"814e361d-adf2-4eab-8544-3cb2d2b3e885\") " pod="openshift-must-gather-v7bb6/must-gather-qw75m"
Feb 02 07:40:02 crc kubenswrapper[4730]: I0202 07:40:02.850427 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n77cw\" (UniqueName: \"kubernetes.io/projected/814e361d-adf2-4eab-8544-3cb2d2b3e885-kube-api-access-n77cw\") pod \"must-gather-qw75m\" (UID: \"814e361d-adf2-4eab-8544-3cb2d2b3e885\") " pod="openshift-must-gather-v7bb6/must-gather-qw75m"
Feb 02 07:40:03 crc kubenswrapper[4730]: I0202 07:40:03.001437 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v7bb6/must-gather-qw75m"
Feb 02 07:40:03 crc kubenswrapper[4730]: I0202 07:40:03.274015 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v7bb6/must-gather-qw75m"]
Feb 02 07:40:03 crc kubenswrapper[4730]: I0202 07:40:03.936367 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7bb6/must-gather-qw75m" event={"ID":"814e361d-adf2-4eab-8544-3cb2d2b3e885","Type":"ContainerStarted","Data":"19f67ee7b43ccf9960d43103d9802f4a3b0c3dd90db3e3e7eacefaaf4f9a6ded"}
Feb 02 07:40:09 crc kubenswrapper[4730]: I0202 07:40:09.985941 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7bb6/must-gather-qw75m" event={"ID":"814e361d-adf2-4eab-8544-3cb2d2b3e885","Type":"ContainerStarted","Data":"1ff8e58e220201efaf47bd8322c0cd9b38b53b631dc53d59327cdd3c48676140"}
Feb 02 07:40:09 crc kubenswrapper[4730]: I0202 07:40:09.986615 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7bb6/must-gather-qw75m" event={"ID":"814e361d-adf2-4eab-8544-3cb2d2b3e885","Type":"ContainerStarted","Data":"4c97a1969605c9bf4bf0efc9d8dfa4d2f4d8cbc22fc49bb9b61ab19523bfd4b9"}
Feb 02 07:40:10 crc kubenswrapper[4730]: I0202 07:40:10.011118 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v7bb6/must-gather-qw75m" podStartSLOduration=2.085881229 podStartE2EDuration="8.011094482s" podCreationTimestamp="2026-02-02 07:40:02 +0000 UTC" firstStartedPulling="2026-02-02 07:40:03.289918172 +0000 UTC m=+776.711121520" lastFinishedPulling="2026-02-02 07:40:09.215131415 +0000 UTC m=+782.636334773" observedRunningTime="2026-02-02 07:40:10.004038313 +0000 UTC m=+783.425241721" watchObservedRunningTime="2026-02-02 07:40:10.011094482 +0000 UTC m=+783.432297860"
Feb 02 07:40:25 crc kubenswrapper[4730]: I0202 07:40:25.509360 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph_0797f61e-5af0-4070-ab85-3026f244c2f3/ceph/0.log"
Feb 02 07:40:27 crc kubenswrapper[4730]: I0202 07:40:27.660367 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:40:27 crc kubenswrapper[4730]: I0202 07:40:27.662034 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:40:27 crc kubenswrapper[4730]: I0202 07:40:27.662297 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t"
Feb 02 07:40:27 crc kubenswrapper[4730]: I0202 07:40:27.663224 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55eac5b141933721b9ad3aae038fd186f17ff4ec4c4e26d6f58c125af5debbac"} pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 07:40:27 crc kubenswrapper[4730]: I0202 07:40:27.663469 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" containerID="cri-o://55eac5b141933721b9ad3aae038fd186f17ff4ec4c4e26d6f58c125af5debbac" gracePeriod=600
Feb 02 07:40:27 crc kubenswrapper[4730]: E0202 07:40:27.724562 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61cde55f_e8c2_493e_82b6_a3b4a839366b.slice/crio-conmon-55eac5b141933721b9ad3aae038fd186f17ff4ec4c4e26d6f58c125af5debbac.scope\": RecentStats: unable to find data in memory cache]"
Feb 02 07:40:28 crc kubenswrapper[4730]: I0202 07:40:28.119676 4730 generic.go:334] "Generic (PLEG): container finished" podID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerID="55eac5b141933721b9ad3aae038fd186f17ff4ec4c4e26d6f58c125af5debbac" exitCode=0
Feb 02 07:40:28 crc kubenswrapper[4730]: I0202 07:40:28.119718 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerDied","Data":"55eac5b141933721b9ad3aae038fd186f17ff4ec4c4e26d6f58c125af5debbac"}
Feb 02 07:40:28 crc kubenswrapper[4730]: I0202 07:40:28.120202 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerStarted","Data":"2a43717ad3d7717ab86054d33ca1e44b954f9959306be1515fa3cef3d728da55"}
Feb 02 07:40:28 crc kubenswrapper[4730]: I0202 07:40:28.120281 4730 scope.go:117] "RemoveContainer" containerID="8849749fe3e9f64250963ade15077dd456c8db563b57325c073662609fdb45bd"
Feb 02 07:40:51 crc kubenswrapper[4730]: I0202 07:40:51.735723 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zjbtc_f7dc234b-4559-460c-a4fe-85cedc72c368/control-plane-machine-set-operator/0.log"
Feb 02 07:40:51 crc kubenswrapper[4730]: I0202 07:40:51.832276 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tmxn6_7eca2f38-c23b-4874-b4a1-b57bafd24604/kube-rbac-proxy/0.log"
Feb 02 07:40:51 crc kubenswrapper[4730]: I0202 07:40:51.856682 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tmxn6_7eca2f38-c23b-4874-b4a1-b57bafd24604/machine-api-operator/0.log"
Feb 02 07:41:04 crc kubenswrapper[4730]: I0202 07:41:04.674097 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qsm4t_224bf167-43f9-4c9c-8b93-f607669dd5a5/cert-manager-controller/0.log"
Feb 02 07:41:04 crc kubenswrapper[4730]: I0202 07:41:04.855501 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-vmn4d_b35b55c2-6ef3-42f1-8d0f-5a878b8edfe9/cert-manager-cainjector/0.log"
Feb 02 07:41:04 crc kubenswrapper[4730]: I0202 07:41:04.889596 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-6k6fw_ab38a1d0-fdbd-4dcb-b02c-a56b0c851b78/cert-manager-webhook/0.log"
Feb 02 07:41:27 crc kubenswrapper[4730]: I0202 07:41:27.105142 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7dfjw"]
Feb 02 07:41:27 crc kubenswrapper[4730]: I0202 07:41:27.108342 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dfjw"
Feb 02 07:41:27 crc kubenswrapper[4730]: I0202 07:41:27.128036 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dfjw"]
Feb 02 07:41:27 crc kubenswrapper[4730]: I0202 07:41:27.187828 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/054886ef-7bb6-4178-bb92-271427b7c57a-catalog-content\") pod \"community-operators-7dfjw\" (UID: \"054886ef-7bb6-4178-bb92-271427b7c57a\") " pod="openshift-marketplace/community-operators-7dfjw"
Feb 02 07:41:27 crc kubenswrapper[4730]: I0202 07:41:27.187892 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9nff\" (UniqueName: \"kubernetes.io/projected/054886ef-7bb6-4178-bb92-271427b7c57a-kube-api-access-z9nff\") pod \"community-operators-7dfjw\" (UID: \"054886ef-7bb6-4178-bb92-271427b7c57a\") " pod="openshift-marketplace/community-operators-7dfjw"
Feb 02 07:41:27 crc kubenswrapper[4730]: I0202 07:41:27.187982 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/054886ef-7bb6-4178-bb92-271427b7c57a-utilities\") pod \"community-operators-7dfjw\" (UID: \"054886ef-7bb6-4178-bb92-271427b7c57a\") " pod="openshift-marketplace/community-operators-7dfjw"
Feb 02 07:41:27 crc kubenswrapper[4730]: I0202 07:41:27.288647 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/054886ef-7bb6-4178-bb92-271427b7c57a-catalog-content\") pod \"community-operators-7dfjw\" (UID: \"054886ef-7bb6-4178-bb92-271427b7c57a\") " pod="openshift-marketplace/community-operators-7dfjw"
Feb 02 07:41:27 crc kubenswrapper[4730]: I0202 07:41:27.289031 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9nff\" (UniqueName: \"kubernetes.io/projected/054886ef-7bb6-4178-bb92-271427b7c57a-kube-api-access-z9nff\") pod \"community-operators-7dfjw\" (UID: \"054886ef-7bb6-4178-bb92-271427b7c57a\") " pod="openshift-marketplace/community-operators-7dfjw"
Feb 02 07:41:27 crc kubenswrapper[4730]: I0202 07:41:27.289081 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/054886ef-7bb6-4178-bb92-271427b7c57a-utilities\") pod \"community-operators-7dfjw\" (UID: \"054886ef-7bb6-4178-bb92-271427b7c57a\") " pod="openshift-marketplace/community-operators-7dfjw"
Feb 02 07:41:27 crc kubenswrapper[4730]: I0202 07:41:27.289285 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/054886ef-7bb6-4178-bb92-271427b7c57a-catalog-content\") pod \"community-operators-7dfjw\" (UID: \"054886ef-7bb6-4178-bb92-271427b7c57a\") " pod="openshift-marketplace/community-operators-7dfjw"
Feb 02 07:41:27 crc kubenswrapper[4730]: I0202 07:41:27.289452 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/054886ef-7bb6-4178-bb92-271427b7c57a-utilities\") pod \"community-operators-7dfjw\" (UID: \"054886ef-7bb6-4178-bb92-271427b7c57a\") " pod="openshift-marketplace/community-operators-7dfjw"
Feb 02 07:41:27 crc kubenswrapper[4730]: I0202 07:41:27.315761 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9nff\" (UniqueName: \"kubernetes.io/projected/054886ef-7bb6-4178-bb92-271427b7c57a-kube-api-access-z9nff\") pod \"community-operators-7dfjw\" (UID: \"054886ef-7bb6-4178-bb92-271427b7c57a\") " pod="openshift-marketplace/community-operators-7dfjw"
Feb 02 07:41:27 crc kubenswrapper[4730]: I0202 07:41:27.466745 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dfjw"
Feb 02 07:41:27 crc kubenswrapper[4730]: I0202 07:41:27.722527 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dfjw"]
Feb 02 07:41:28 crc kubenswrapper[4730]: I0202 07:41:28.509637 4730 generic.go:334] "Generic (PLEG): container finished" podID="054886ef-7bb6-4178-bb92-271427b7c57a" containerID="61afbaf0eb049f7507e5adee38a4b216ac4eb3a180939250cd29019e737f79b8" exitCode=0
Feb 02 07:41:28 crc kubenswrapper[4730]: I0202 07:41:28.509734 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dfjw" event={"ID":"054886ef-7bb6-4178-bb92-271427b7c57a","Type":"ContainerDied","Data":"61afbaf0eb049f7507e5adee38a4b216ac4eb3a180939250cd29019e737f79b8"}
Feb 02 07:41:28 crc kubenswrapper[4730]: I0202 07:41:28.510021 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dfjw" event={"ID":"054886ef-7bb6-4178-bb92-271427b7c57a","Type":"ContainerStarted","Data":"ffdd2c19b9bda1211a7f97f2ded5c875b437076211144c19214d977c012e9e2d"}
Feb 02 07:41:28 crc kubenswrapper[4730]: I0202 07:41:28.511912 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 07:41:29 crc kubenswrapper[4730]: I0202 07:41:29.517577 4730 generic.go:334] "Generic (PLEG): container finished" podID="054886ef-7bb6-4178-bb92-271427b7c57a" containerID="ced6df70c92a8159b4ad2f444a36e73d785831228eff4f8654539746d04b1f05" exitCode=0
Feb 02 07:41:29 crc kubenswrapper[4730]: I0202 07:41:29.517639 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dfjw" event={"ID":"054886ef-7bb6-4178-bb92-271427b7c57a","Type":"ContainerDied","Data":"ced6df70c92a8159b4ad2f444a36e73d785831228eff4f8654539746d04b1f05"}
Feb 02 07:41:30 crc kubenswrapper[4730]: I0202 07:41:30.525770 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dfjw" event={"ID":"054886ef-7bb6-4178-bb92-271427b7c57a","Type":"ContainerStarted","Data":"821de1ce0d729e8fbd75f2d5f9f92c9a5b74311b9adab4179786f5f6a4f63593"}
Feb 02 07:41:30 crc kubenswrapper[4730]: I0202 07:41:30.560153 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7dfjw" podStartSLOduration=2.15715829 podStartE2EDuration="3.560133493s" podCreationTimestamp="2026-02-02 07:41:27 +0000 UTC" firstStartedPulling="2026-02-02 07:41:28.511523493 +0000 UTC m=+861.932726881" lastFinishedPulling="2026-02-02 07:41:29.914498736 +0000 UTC m=+863.335702084" observedRunningTime="2026-02-02 07:41:30.555238603 +0000 UTC m=+863.976441961" watchObservedRunningTime="2026-02-02 07:41:30.560133493 +0000 UTC m=+863.981336851"
Feb 02 07:41:31 crc kubenswrapper[4730]: I0202 07:41:31.484806 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z8st5"]
Feb 02 07:41:31 crc kubenswrapper[4730]: I0202 07:41:31.485749 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8st5"
Feb 02 07:41:31 crc kubenswrapper[4730]: I0202 07:41:31.515146 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8st5"]
Feb 02 07:41:31 crc kubenswrapper[4730]: I0202 07:41:31.648105 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d11d0d-61b6-4145-b6b9-b6f914f58601-utilities\") pod \"certified-operators-z8st5\" (UID: \"b1d11d0d-61b6-4145-b6b9-b6f914f58601\") " pod="openshift-marketplace/certified-operators-z8st5"
Feb 02 07:41:31 crc kubenswrapper[4730]: I0202 07:41:31.648145 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d11d0d-61b6-4145-b6b9-b6f914f58601-catalog-content\") pod \"certified-operators-z8st5\" (UID: \"b1d11d0d-61b6-4145-b6b9-b6f914f58601\") " pod="openshift-marketplace/certified-operators-z8st5"
Feb 02 07:41:31 crc kubenswrapper[4730]: I0202 07:41:31.648218 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjjtx\" (UniqueName: \"kubernetes.io/projected/b1d11d0d-61b6-4145-b6b9-b6f914f58601-kube-api-access-xjjtx\") pod \"certified-operators-z8st5\" (UID: \"b1d11d0d-61b6-4145-b6b9-b6f914f58601\") " pod="openshift-marketplace/certified-operators-z8st5"
Feb 02 07:41:31 crc kubenswrapper[4730]: I0202 07:41:31.749295 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d11d0d-61b6-4145-b6b9-b6f914f58601-utilities\") pod \"certified-operators-z8st5\" (UID: \"b1d11d0d-61b6-4145-b6b9-b6f914f58601\") " pod="openshift-marketplace/certified-operators-z8st5"
Feb 02 07:41:31 crc kubenswrapper[4730]: I0202 07:41:31.749339 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d11d0d-61b6-4145-b6b9-b6f914f58601-catalog-content\") pod \"certified-operators-z8st5\" (UID: \"b1d11d0d-61b6-4145-b6b9-b6f914f58601\") " pod="openshift-marketplace/certified-operators-z8st5"
Feb 02 07:41:31 crc kubenswrapper[4730]: I0202 07:41:31.749368 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjjtx\" (UniqueName: \"kubernetes.io/projected/b1d11d0d-61b6-4145-b6b9-b6f914f58601-kube-api-access-xjjtx\") pod \"certified-operators-z8st5\" (UID: \"b1d11d0d-61b6-4145-b6b9-b6f914f58601\") " pod="openshift-marketplace/certified-operators-z8st5"
Feb 02 07:41:31 crc kubenswrapper[4730]: I0202 07:41:31.749759 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d11d0d-61b6-4145-b6b9-b6f914f58601-utilities\") pod \"certified-operators-z8st5\" (UID: \"b1d11d0d-61b6-4145-b6b9-b6f914f58601\") " pod="openshift-marketplace/certified-operators-z8st5"
Feb 02 07:41:31 crc kubenswrapper[4730]: I0202 07:41:31.749953 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d11d0d-61b6-4145-b6b9-b6f914f58601-catalog-content\") pod \"certified-operators-z8st5\" (UID: \"b1d11d0d-61b6-4145-b6b9-b6f914f58601\") " pod="openshift-marketplace/certified-operators-z8st5"
Feb 02 07:41:31 crc kubenswrapper[4730]: I0202 07:41:31.767362 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjjtx\" (UniqueName: \"kubernetes.io/projected/b1d11d0d-61b6-4145-b6b9-b6f914f58601-kube-api-access-xjjtx\") pod \"certified-operators-z8st5\" (UID: \"b1d11d0d-61b6-4145-b6b9-b6f914f58601\") " pod="openshift-marketplace/certified-operators-z8st5"
Feb 02 07:41:31 crc kubenswrapper[4730]: I0202 07:41:31.805622 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8st5"
Feb 02 07:41:32 crc kubenswrapper[4730]: I0202 07:41:32.235256 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8st5"]
Feb 02 07:41:32 crc kubenswrapper[4730]: W0202 07:41:32.244484 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1d11d0d_61b6_4145_b6b9_b6f914f58601.slice/crio-a16d6e289a2bff118c35742ba73b09ee1dad1ef40294dedc1c864d0038f6b308 WatchSource:0}: Error finding container a16d6e289a2bff118c35742ba73b09ee1dad1ef40294dedc1c864d0038f6b308: Status 404 returned error can't find the container with id a16d6e289a2bff118c35742ba73b09ee1dad1ef40294dedc1c864d0038f6b308
Feb 02 07:41:32 crc kubenswrapper[4730]: I0202 07:41:32.539375 4730 generic.go:334] "Generic (PLEG): container finished" podID="b1d11d0d-61b6-4145-b6b9-b6f914f58601" containerID="9e25924c7ed34f0b6d63dc3b16e1d55db01813bc0795237171cb1c81ad599775" exitCode=0
Feb 02 07:41:32 crc kubenswrapper[4730]: I0202 07:41:32.539434 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8st5" event={"ID":"b1d11d0d-61b6-4145-b6b9-b6f914f58601","Type":"ContainerDied","Data":"9e25924c7ed34f0b6d63dc3b16e1d55db01813bc0795237171cb1c81ad599775"}
Feb 02 07:41:32 crc kubenswrapper[4730]: I0202 07:41:32.539471 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8st5" event={"ID":"b1d11d0d-61b6-4145-b6b9-b6f914f58601","Type":"ContainerStarted","Data":"a16d6e289a2bff118c35742ba73b09ee1dad1ef40294dedc1c864d0038f6b308"}
Feb 02 07:41:34 crc kubenswrapper[4730]: I0202 07:41:34.112146 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mdtqb_4f71e670-eb96-4321-af75-8ef24727cb13/extract-utilities/0.log"
Feb 02 07:41:34 crc kubenswrapper[4730]: I0202 07:41:34.271400 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mdtqb_4f71e670-eb96-4321-af75-8ef24727cb13/extract-content/0.log"
Feb 02 07:41:34 crc kubenswrapper[4730]: I0202 07:41:34.305095 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mdtqb_4f71e670-eb96-4321-af75-8ef24727cb13/extract-utilities/0.log"
Feb 02 07:41:34 crc kubenswrapper[4730]: I0202 07:41:34.307779 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mdtqb_4f71e670-eb96-4321-af75-8ef24727cb13/extract-content/0.log"
Feb 02 07:41:34 crc kubenswrapper[4730]: I0202 07:41:34.443722 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mdtqb_4f71e670-eb96-4321-af75-8ef24727cb13/extract-utilities/0.log"
Feb 02 07:41:34 crc kubenswrapper[4730]: I0202 07:41:34.446662 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mdtqb_4f71e670-eb96-4321-af75-8ef24727cb13/extract-content/0.log"
Feb 02 07:41:34 crc kubenswrapper[4730]: I0202 07:41:34.602496 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z8st5_b1d11d0d-61b6-4145-b6b9-b6f914f58601/extract-utilities/0.log"
Feb 02 07:41:34 crc kubenswrapper[4730]: I0202 07:41:34.643114 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mdtqb_4f71e670-eb96-4321-af75-8ef24727cb13/registry-server/0.log"
Feb 02 07:41:34 crc kubenswrapper[4730]: I0202 07:41:34.807830 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z8st5_b1d11d0d-61b6-4145-b6b9-b6f914f58601/extract-utilities/0.log"
Feb 02 07:41:34 crc kubenswrapper[4730]: I0202 07:41:34.993916 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z8st5_b1d11d0d-61b6-4145-b6b9-b6f914f58601/extract-utilities/0.log"
Feb 02 07:41:35 crc kubenswrapper[4730]: I0202 07:41:35.146149 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dfjw_054886ef-7bb6-4178-bb92-271427b7c57a/extract-utilities/0.log"
Feb 02 07:41:35 crc kubenswrapper[4730]: I0202 07:41:35.273844 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dfjw_054886ef-7bb6-4178-bb92-271427b7c57a/extract-content/0.log"
Feb 02 07:41:35 crc kubenswrapper[4730]: I0202 07:41:35.290691 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dfjw_054886ef-7bb6-4178-bb92-271427b7c57a/extract-utilities/0.log"
Feb 02 07:41:35 crc kubenswrapper[4730]: I0202 07:41:35.291504 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dfjw_054886ef-7bb6-4178-bb92-271427b7c57a/extract-content/0.log"
Feb 02 07:41:35 crc kubenswrapper[4730]: I0202 07:41:35.425289 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dfjw_054886ef-7bb6-4178-bb92-271427b7c57a/extract-content/0.log"
Feb 02 07:41:35 crc kubenswrapper[4730]: I0202 07:41:35.474208 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dfjw_054886ef-7bb6-4178-bb92-271427b7c57a/registry-server/0.log"
Feb 02 07:41:35 crc kubenswrapper[4730]: I0202 07:41:35.488362 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dfjw_054886ef-7bb6-4178-bb92-271427b7c57a/extract-utilities/0.log"
Feb 02 07:41:35 crc kubenswrapper[4730]: I0202 07:41:35.592392 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qkzng_b3084604-22f9-41f3-8f77-f2d0d0bee504/extract-utilities/0.log"
Feb 02 07:41:35 crc kubenswrapper[4730]: I0202 07:41:35.754114 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qkzng_b3084604-22f9-41f3-8f77-f2d0d0bee504/extract-content/0.log"
Feb 02 07:41:35 crc kubenswrapper[4730]: I0202 07:41:35.757125 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qkzng_b3084604-22f9-41f3-8f77-f2d0d0bee504/extract-utilities/0.log"
Feb 02 07:41:35 crc kubenswrapper[4730]: I0202 07:41:35.777227 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qkzng_b3084604-22f9-41f3-8f77-f2d0d0bee504/extract-content/0.log"
Feb 02 07:41:35 crc kubenswrapper[4730]: I0202 07:41:35.930415 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qkzng_b3084604-22f9-41f3-8f77-f2d0d0bee504/extract-content/0.log"
Feb 02 07:41:35 crc kubenswrapper[4730]: I0202 07:41:35.943820 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qkzng_b3084604-22f9-41f3-8f77-f2d0d0bee504/extract-utilities/0.log"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.036536 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qkzng_b3084604-22f9-41f3-8f77-f2d0d0bee504/registry-server/0.log"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.120379 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gsgln_0f381f8d-949e-41ee-a20e-31189f5630f1/marketplace-operator/0.log"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.120489 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnq6k_4f6f5114-b6e2-4843-8ff5-ac67569c1dbd/extract-utilities/0.log"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.280261 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nz6hs"]
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.288315 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz6hs"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.291702 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz6hs"]
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.315591 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnq6k_4f6f5114-b6e2-4843-8ff5-ac67569c1dbd/extract-utilities/0.log"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.355293 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnq6k_4f6f5114-b6e2-4843-8ff5-ac67569c1dbd/extract-content/0.log"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.371234 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnq6k_4f6f5114-b6e2-4843-8ff5-ac67569c1dbd/extract-content/0.log"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.406765 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a71982-b99f-48e8-b1ea-e94353608314-catalog-content\") pod \"redhat-marketplace-nz6hs\" (UID: \"d4a71982-b99f-48e8-b1ea-e94353608314\") " pod="openshift-marketplace/redhat-marketplace-nz6hs"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.406810 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a71982-b99f-48e8-b1ea-e94353608314-utilities\") pod \"redhat-marketplace-nz6hs\" (UID: \"d4a71982-b99f-48e8-b1ea-e94353608314\") " pod="openshift-marketplace/redhat-marketplace-nz6hs"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.406830 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkstn\" (UniqueName: \"kubernetes.io/projected/d4a71982-b99f-48e8-b1ea-e94353608314-kube-api-access-gkstn\") pod \"redhat-marketplace-nz6hs\" (UID: \"d4a71982-b99f-48e8-b1ea-e94353608314\") " pod="openshift-marketplace/redhat-marketplace-nz6hs"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.508320 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a71982-b99f-48e8-b1ea-e94353608314-catalog-content\") pod \"redhat-marketplace-nz6hs\" (UID: \"d4a71982-b99f-48e8-b1ea-e94353608314\") " pod="openshift-marketplace/redhat-marketplace-nz6hs"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.508392 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a71982-b99f-48e8-b1ea-e94353608314-utilities\") pod \"redhat-marketplace-nz6hs\" (UID: \"d4a71982-b99f-48e8-b1ea-e94353608314\") " pod="openshift-marketplace/redhat-marketplace-nz6hs"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.508608 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkstn\" (UniqueName: \"kubernetes.io/projected/d4a71982-b99f-48e8-b1ea-e94353608314-kube-api-access-gkstn\") pod \"redhat-marketplace-nz6hs\" (UID: \"d4a71982-b99f-48e8-b1ea-e94353608314\") " pod="openshift-marketplace/redhat-marketplace-nz6hs"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.508952 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a71982-b99f-48e8-b1ea-e94353608314-catalog-content\") pod \"redhat-marketplace-nz6hs\" (UID: \"d4a71982-b99f-48e8-b1ea-e94353608314\") " pod="openshift-marketplace/redhat-marketplace-nz6hs"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.509082 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a71982-b99f-48e8-b1ea-e94353608314-utilities\") pod \"redhat-marketplace-nz6hs\" (UID: \"d4a71982-b99f-48e8-b1ea-e94353608314\") " pod="openshift-marketplace/redhat-marketplace-nz6hs"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.525594 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkstn\" (UniqueName: \"kubernetes.io/projected/d4a71982-b99f-48e8-b1ea-e94353608314-kube-api-access-gkstn\") pod \"redhat-marketplace-nz6hs\" (UID: \"d4a71982-b99f-48e8-b1ea-e94353608314\") " pod="openshift-marketplace/redhat-marketplace-nz6hs"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.541309 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnq6k_4f6f5114-b6e2-4843-8ff5-ac67569c1dbd/extract-utilities/0.log"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.567995 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnq6k_4f6f5114-b6e2-4843-8ff5-ac67569c1dbd/extract-content/0.log"
Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.605348 4730 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz6hs" Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.606063 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnq6k_4f6f5114-b6e2-4843-8ff5-ac67569c1dbd/registry-server/0.log" Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.741410 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kznfh_ace3527b-5d93-430c-8ae2-89d447f31735/extract-utilities/0.log" Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.893080 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kznfh_ace3527b-5d93-430c-8ae2-89d447f31735/extract-utilities/0.log" Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.932013 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kznfh_ace3527b-5d93-430c-8ae2-89d447f31735/extract-content/0.log" Feb 02 07:41:36 crc kubenswrapper[4730]: I0202 07:41:36.964512 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kznfh_ace3527b-5d93-430c-8ae2-89d447f31735/extract-content/0.log" Feb 02 07:41:37 crc kubenswrapper[4730]: I0202 07:41:37.104219 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kznfh_ace3527b-5d93-430c-8ae2-89d447f31735/extract-utilities/0.log" Feb 02 07:41:37 crc kubenswrapper[4730]: I0202 07:41:37.184995 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kznfh_ace3527b-5d93-430c-8ae2-89d447f31735/extract-content/0.log" Feb 02 07:41:37 crc kubenswrapper[4730]: I0202 07:41:37.236343 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kznfh_ace3527b-5d93-430c-8ae2-89d447f31735/registry-server/0.log" Feb 02 07:41:37 crc kubenswrapper[4730]: I0202 
07:41:37.467877 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7dfjw" Feb 02 07:41:37 crc kubenswrapper[4730]: I0202 07:41:37.468272 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7dfjw" Feb 02 07:41:37 crc kubenswrapper[4730]: I0202 07:41:37.513710 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7dfjw" Feb 02 07:41:37 crc kubenswrapper[4730]: I0202 07:41:37.613077 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7dfjw" Feb 02 07:41:38 crc kubenswrapper[4730]: I0202 07:41:38.126461 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz6hs"] Feb 02 07:41:38 crc kubenswrapper[4730]: W0202 07:41:38.136313 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4a71982_b99f_48e8_b1ea_e94353608314.slice/crio-0be4fd2d4a5ba2f388432a75c574d52911698edde512e5b8f19052bf513af2be WatchSource:0}: Error finding container 0be4fd2d4a5ba2f388432a75c574d52911698edde512e5b8f19052bf513af2be: Status 404 returned error can't find the container with id 0be4fd2d4a5ba2f388432a75c574d52911698edde512e5b8f19052bf513af2be Feb 02 07:41:38 crc kubenswrapper[4730]: I0202 07:41:38.590384 4730 generic.go:334] "Generic (PLEG): container finished" podID="b1d11d0d-61b6-4145-b6b9-b6f914f58601" containerID="64f6248615959d3e3d51de3705f8a8336566b9fec16e5cefc2d8a2c7ba7dc45b" exitCode=0 Feb 02 07:41:38 crc kubenswrapper[4730]: I0202 07:41:38.590517 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8st5" 
event={"ID":"b1d11d0d-61b6-4145-b6b9-b6f914f58601","Type":"ContainerDied","Data":"64f6248615959d3e3d51de3705f8a8336566b9fec16e5cefc2d8a2c7ba7dc45b"} Feb 02 07:41:38 crc kubenswrapper[4730]: I0202 07:41:38.592484 4730 generic.go:334] "Generic (PLEG): container finished" podID="d4a71982-b99f-48e8-b1ea-e94353608314" containerID="c7346818ca214be8c45341d7c9e9f4181b9d4b63d27704d103db44740d1e188f" exitCode=0 Feb 02 07:41:38 crc kubenswrapper[4730]: I0202 07:41:38.592610 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz6hs" event={"ID":"d4a71982-b99f-48e8-b1ea-e94353608314","Type":"ContainerDied","Data":"c7346818ca214be8c45341d7c9e9f4181b9d4b63d27704d103db44740d1e188f"} Feb 02 07:41:38 crc kubenswrapper[4730]: I0202 07:41:38.592663 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz6hs" event={"ID":"d4a71982-b99f-48e8-b1ea-e94353608314","Type":"ContainerStarted","Data":"0be4fd2d4a5ba2f388432a75c574d52911698edde512e5b8f19052bf513af2be"} Feb 02 07:41:39 crc kubenswrapper[4730]: I0202 07:41:39.603445 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8st5" event={"ID":"b1d11d0d-61b6-4145-b6b9-b6f914f58601","Type":"ContainerStarted","Data":"265584f7a4fc5deea19ebaf2b953cccc448b65682b756339f1d9777b85e3a72d"} Feb 02 07:41:39 crc kubenswrapper[4730]: I0202 07:41:39.656052 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z8st5" podStartSLOduration=2.206190401 podStartE2EDuration="8.656029513s" podCreationTimestamp="2026-02-02 07:41:31 +0000 UTC" firstStartedPulling="2026-02-02 07:41:32.541407817 +0000 UTC m=+865.962611165" lastFinishedPulling="2026-02-02 07:41:38.991246899 +0000 UTC m=+872.412450277" observedRunningTime="2026-02-02 07:41:39.655720905 +0000 UTC m=+873.076924263" watchObservedRunningTime="2026-02-02 07:41:39.656029513 +0000 UTC 
m=+873.077232871" Feb 02 07:41:39 crc kubenswrapper[4730]: I0202 07:41:39.873790 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7dfjw"] Feb 02 07:41:40 crc kubenswrapper[4730]: I0202 07:41:40.612931 4730 generic.go:334] "Generic (PLEG): container finished" podID="d4a71982-b99f-48e8-b1ea-e94353608314" containerID="3afd5ab580298c885ea62fc18a42ef50ebc8833e951eaa41ee9dd93270c082ad" exitCode=0 Feb 02 07:41:40 crc kubenswrapper[4730]: I0202 07:41:40.613034 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz6hs" event={"ID":"d4a71982-b99f-48e8-b1ea-e94353608314","Type":"ContainerDied","Data":"3afd5ab580298c885ea62fc18a42ef50ebc8833e951eaa41ee9dd93270c082ad"} Feb 02 07:41:40 crc kubenswrapper[4730]: I0202 07:41:40.613711 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7dfjw" podUID="054886ef-7bb6-4178-bb92-271427b7c57a" containerName="registry-server" containerID="cri-o://821de1ce0d729e8fbd75f2d5f9f92c9a5b74311b9adab4179786f5f6a4f63593" gracePeriod=2 Feb 02 07:41:40 crc kubenswrapper[4730]: I0202 07:41:40.938556 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7dfjw" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.066819 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/054886ef-7bb6-4178-bb92-271427b7c57a-utilities\") pod \"054886ef-7bb6-4178-bb92-271427b7c57a\" (UID: \"054886ef-7bb6-4178-bb92-271427b7c57a\") " Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.066904 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/054886ef-7bb6-4178-bb92-271427b7c57a-catalog-content\") pod \"054886ef-7bb6-4178-bb92-271427b7c57a\" (UID: \"054886ef-7bb6-4178-bb92-271427b7c57a\") " Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.066968 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9nff\" (UniqueName: \"kubernetes.io/projected/054886ef-7bb6-4178-bb92-271427b7c57a-kube-api-access-z9nff\") pod \"054886ef-7bb6-4178-bb92-271427b7c57a\" (UID: \"054886ef-7bb6-4178-bb92-271427b7c57a\") " Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.067815 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/054886ef-7bb6-4178-bb92-271427b7c57a-utilities" (OuterVolumeSpecName: "utilities") pod "054886ef-7bb6-4178-bb92-271427b7c57a" (UID: "054886ef-7bb6-4178-bb92-271427b7c57a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.072715 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/054886ef-7bb6-4178-bb92-271427b7c57a-kube-api-access-z9nff" (OuterVolumeSpecName: "kube-api-access-z9nff") pod "054886ef-7bb6-4178-bb92-271427b7c57a" (UID: "054886ef-7bb6-4178-bb92-271427b7c57a"). InnerVolumeSpecName "kube-api-access-z9nff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.121496 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/054886ef-7bb6-4178-bb92-271427b7c57a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "054886ef-7bb6-4178-bb92-271427b7c57a" (UID: "054886ef-7bb6-4178-bb92-271427b7c57a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.168303 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/054886ef-7bb6-4178-bb92-271427b7c57a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.168340 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/054886ef-7bb6-4178-bb92-271427b7c57a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.168352 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9nff\" (UniqueName: \"kubernetes.io/projected/054886ef-7bb6-4178-bb92-271427b7c57a-kube-api-access-z9nff\") on node \"crc\" DevicePath \"\"" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.620389 4730 generic.go:334] "Generic (PLEG): container finished" podID="054886ef-7bb6-4178-bb92-271427b7c57a" containerID="821de1ce0d729e8fbd75f2d5f9f92c9a5b74311b9adab4179786f5f6a4f63593" exitCode=0 Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.620461 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dfjw" event={"ID":"054886ef-7bb6-4178-bb92-271427b7c57a","Type":"ContainerDied","Data":"821de1ce0d729e8fbd75f2d5f9f92c9a5b74311b9adab4179786f5f6a4f63593"} Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.620492 4730 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-7dfjw" event={"ID":"054886ef-7bb6-4178-bb92-271427b7c57a","Type":"ContainerDied","Data":"ffdd2c19b9bda1211a7f97f2ded5c875b437076211144c19214d977c012e9e2d"} Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.620514 4730 scope.go:117] "RemoveContainer" containerID="821de1ce0d729e8fbd75f2d5f9f92c9a5b74311b9adab4179786f5f6a4f63593" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.620636 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dfjw" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.624894 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz6hs" event={"ID":"d4a71982-b99f-48e8-b1ea-e94353608314","Type":"ContainerStarted","Data":"753f2b5739eedbf4fae35f8e3af16335f28cc439fdfa107ab071996d5dfc5b0a"} Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.639298 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7dfjw"] Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.639858 4730 scope.go:117] "RemoveContainer" containerID="ced6df70c92a8159b4ad2f444a36e73d785831228eff4f8654539746d04b1f05" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.652822 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7dfjw"] Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.663870 4730 scope.go:117] "RemoveContainer" containerID="61afbaf0eb049f7507e5adee38a4b216ac4eb3a180939250cd29019e737f79b8" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.675430 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nz6hs" podStartSLOduration=4.267632009 podStartE2EDuration="5.675411739s" podCreationTimestamp="2026-02-02 07:41:36 +0000 UTC" firstStartedPulling="2026-02-02 07:41:39.60644704 +0000 UTC 
m=+873.027650428" lastFinishedPulling="2026-02-02 07:41:41.01422681 +0000 UTC m=+874.435430158" observedRunningTime="2026-02-02 07:41:41.66980906 +0000 UTC m=+875.091012418" watchObservedRunningTime="2026-02-02 07:41:41.675411739 +0000 UTC m=+875.096615087" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.682553 4730 scope.go:117] "RemoveContainer" containerID="821de1ce0d729e8fbd75f2d5f9f92c9a5b74311b9adab4179786f5f6a4f63593" Feb 02 07:41:41 crc kubenswrapper[4730]: E0202 07:41:41.683021 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"821de1ce0d729e8fbd75f2d5f9f92c9a5b74311b9adab4179786f5f6a4f63593\": container with ID starting with 821de1ce0d729e8fbd75f2d5f9f92c9a5b74311b9adab4179786f5f6a4f63593 not found: ID does not exist" containerID="821de1ce0d729e8fbd75f2d5f9f92c9a5b74311b9adab4179786f5f6a4f63593" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.683070 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"821de1ce0d729e8fbd75f2d5f9f92c9a5b74311b9adab4179786f5f6a4f63593"} err="failed to get container status \"821de1ce0d729e8fbd75f2d5f9f92c9a5b74311b9adab4179786f5f6a4f63593\": rpc error: code = NotFound desc = could not find container \"821de1ce0d729e8fbd75f2d5f9f92c9a5b74311b9adab4179786f5f6a4f63593\": container with ID starting with 821de1ce0d729e8fbd75f2d5f9f92c9a5b74311b9adab4179786f5f6a4f63593 not found: ID does not exist" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.683102 4730 scope.go:117] "RemoveContainer" containerID="ced6df70c92a8159b4ad2f444a36e73d785831228eff4f8654539746d04b1f05" Feb 02 07:41:41 crc kubenswrapper[4730]: E0202 07:41:41.683429 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced6df70c92a8159b4ad2f444a36e73d785831228eff4f8654539746d04b1f05\": container with ID starting with 
ced6df70c92a8159b4ad2f444a36e73d785831228eff4f8654539746d04b1f05 not found: ID does not exist" containerID="ced6df70c92a8159b4ad2f444a36e73d785831228eff4f8654539746d04b1f05" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.683464 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced6df70c92a8159b4ad2f444a36e73d785831228eff4f8654539746d04b1f05"} err="failed to get container status \"ced6df70c92a8159b4ad2f444a36e73d785831228eff4f8654539746d04b1f05\": rpc error: code = NotFound desc = could not find container \"ced6df70c92a8159b4ad2f444a36e73d785831228eff4f8654539746d04b1f05\": container with ID starting with ced6df70c92a8159b4ad2f444a36e73d785831228eff4f8654539746d04b1f05 not found: ID does not exist" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.683489 4730 scope.go:117] "RemoveContainer" containerID="61afbaf0eb049f7507e5adee38a4b216ac4eb3a180939250cd29019e737f79b8" Feb 02 07:41:41 crc kubenswrapper[4730]: E0202 07:41:41.683956 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61afbaf0eb049f7507e5adee38a4b216ac4eb3a180939250cd29019e737f79b8\": container with ID starting with 61afbaf0eb049f7507e5adee38a4b216ac4eb3a180939250cd29019e737f79b8 not found: ID does not exist" containerID="61afbaf0eb049f7507e5adee38a4b216ac4eb3a180939250cd29019e737f79b8" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.683984 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61afbaf0eb049f7507e5adee38a4b216ac4eb3a180939250cd29019e737f79b8"} err="failed to get container status \"61afbaf0eb049f7507e5adee38a4b216ac4eb3a180939250cd29019e737f79b8\": rpc error: code = NotFound desc = could not find container \"61afbaf0eb049f7507e5adee38a4b216ac4eb3a180939250cd29019e737f79b8\": container with ID starting with 61afbaf0eb049f7507e5adee38a4b216ac4eb3a180939250cd29019e737f79b8 not found: ID does not 
exist" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.806198 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z8st5" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.806501 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z8st5" Feb 02 07:41:41 crc kubenswrapper[4730]: I0202 07:41:41.857257 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z8st5" Feb 02 07:41:43 crc kubenswrapper[4730]: I0202 07:41:43.266059 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="054886ef-7bb6-4178-bb92-271427b7c57a" path="/var/lib/kubelet/pods/054886ef-7bb6-4178-bb92-271427b7c57a/volumes" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.089539 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n89wl"] Feb 02 07:41:46 crc kubenswrapper[4730]: E0202 07:41:46.090408 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054886ef-7bb6-4178-bb92-271427b7c57a" containerName="extract-content" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.090440 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="054886ef-7bb6-4178-bb92-271427b7c57a" containerName="extract-content" Feb 02 07:41:46 crc kubenswrapper[4730]: E0202 07:41:46.090489 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054886ef-7bb6-4178-bb92-271427b7c57a" containerName="registry-server" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.090508 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="054886ef-7bb6-4178-bb92-271427b7c57a" containerName="registry-server" Feb 02 07:41:46 crc kubenswrapper[4730]: E0202 07:41:46.090549 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054886ef-7bb6-4178-bb92-271427b7c57a" containerName="extract-utilities" Feb 
02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.090567 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="054886ef-7bb6-4178-bb92-271427b7c57a" containerName="extract-utilities" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.090791 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="054886ef-7bb6-4178-bb92-271427b7c57a" containerName="registry-server" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.092756 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n89wl" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.103588 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n89wl"] Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.232771 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8bzf\" (UniqueName: \"kubernetes.io/projected/983c3685-ff24-415e-b5f2-3e3c5d395acb-kube-api-access-w8bzf\") pod \"redhat-operators-n89wl\" (UID: \"983c3685-ff24-415e-b5f2-3e3c5d395acb\") " pod="openshift-marketplace/redhat-operators-n89wl" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.232841 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983c3685-ff24-415e-b5f2-3e3c5d395acb-utilities\") pod \"redhat-operators-n89wl\" (UID: \"983c3685-ff24-415e-b5f2-3e3c5d395acb\") " pod="openshift-marketplace/redhat-operators-n89wl" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.233024 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983c3685-ff24-415e-b5f2-3e3c5d395acb-catalog-content\") pod \"redhat-operators-n89wl\" (UID: \"983c3685-ff24-415e-b5f2-3e3c5d395acb\") " pod="openshift-marketplace/redhat-operators-n89wl" Feb 02 07:41:46 
crc kubenswrapper[4730]: I0202 07:41:46.335118 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8bzf\" (UniqueName: \"kubernetes.io/projected/983c3685-ff24-415e-b5f2-3e3c5d395acb-kube-api-access-w8bzf\") pod \"redhat-operators-n89wl\" (UID: \"983c3685-ff24-415e-b5f2-3e3c5d395acb\") " pod="openshift-marketplace/redhat-operators-n89wl" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.335234 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983c3685-ff24-415e-b5f2-3e3c5d395acb-utilities\") pod \"redhat-operators-n89wl\" (UID: \"983c3685-ff24-415e-b5f2-3e3c5d395acb\") " pod="openshift-marketplace/redhat-operators-n89wl" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.335306 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983c3685-ff24-415e-b5f2-3e3c5d395acb-catalog-content\") pod \"redhat-operators-n89wl\" (UID: \"983c3685-ff24-415e-b5f2-3e3c5d395acb\") " pod="openshift-marketplace/redhat-operators-n89wl" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.338915 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983c3685-ff24-415e-b5f2-3e3c5d395acb-utilities\") pod \"redhat-operators-n89wl\" (UID: \"983c3685-ff24-415e-b5f2-3e3c5d395acb\") " pod="openshift-marketplace/redhat-operators-n89wl" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.338966 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983c3685-ff24-415e-b5f2-3e3c5d395acb-catalog-content\") pod \"redhat-operators-n89wl\" (UID: \"983c3685-ff24-415e-b5f2-3e3c5d395acb\") " pod="openshift-marketplace/redhat-operators-n89wl" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.368275 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8bzf\" (UniqueName: \"kubernetes.io/projected/983c3685-ff24-415e-b5f2-3e3c5d395acb-kube-api-access-w8bzf\") pod \"redhat-operators-n89wl\" (UID: \"983c3685-ff24-415e-b5f2-3e3c5d395acb\") " pod="openshift-marketplace/redhat-operators-n89wl" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.436454 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n89wl" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.606376 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nz6hs" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.610418 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nz6hs" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.661778 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nz6hs" Feb 02 07:41:46 crc kubenswrapper[4730]: I0202 07:41:46.673494 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n89wl"] Feb 02 07:41:47 crc kubenswrapper[4730]: I0202 07:41:47.663808 4730 generic.go:334] "Generic (PLEG): container finished" podID="983c3685-ff24-415e-b5f2-3e3c5d395acb" containerID="7f31c894ee8ba800cb0eca0d005d478c53adea705fcf9f96698df11c3f3897ea" exitCode=0 Feb 02 07:41:47 crc kubenswrapper[4730]: I0202 07:41:47.663879 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n89wl" event={"ID":"983c3685-ff24-415e-b5f2-3e3c5d395acb","Type":"ContainerDied","Data":"7f31c894ee8ba800cb0eca0d005d478c53adea705fcf9f96698df11c3f3897ea"} Feb 02 07:41:47 crc kubenswrapper[4730]: I0202 07:41:47.664902 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n89wl" 
event={"ID":"983c3685-ff24-415e-b5f2-3e3c5d395acb","Type":"ContainerStarted","Data":"81e6c7ee1d75d46a3f8f059ff3bb112a7ca51080c95d86996fd843d61ba9280b"} Feb 02 07:41:47 crc kubenswrapper[4730]: I0202 07:41:47.723198 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nz6hs" Feb 02 07:41:49 crc kubenswrapper[4730]: I0202 07:41:49.074474 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz6hs"] Feb 02 07:41:49 crc kubenswrapper[4730]: I0202 07:41:49.683264 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n89wl" event={"ID":"983c3685-ff24-415e-b5f2-3e3c5d395acb","Type":"ContainerStarted","Data":"968db7e481ad29f8546e539f9ea1810477553b98a3c1e0580d2e537a85fdbf4b"} Feb 02 07:41:49 crc kubenswrapper[4730]: I0202 07:41:49.683337 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nz6hs" podUID="d4a71982-b99f-48e8-b1ea-e94353608314" containerName="registry-server" containerID="cri-o://753f2b5739eedbf4fae35f8e3af16335f28cc439fdfa107ab071996d5dfc5b0a" gracePeriod=2 Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.061216 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz6hs" Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.187736 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkstn\" (UniqueName: \"kubernetes.io/projected/d4a71982-b99f-48e8-b1ea-e94353608314-kube-api-access-gkstn\") pod \"d4a71982-b99f-48e8-b1ea-e94353608314\" (UID: \"d4a71982-b99f-48e8-b1ea-e94353608314\") " Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.187790 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a71982-b99f-48e8-b1ea-e94353608314-utilities\") pod \"d4a71982-b99f-48e8-b1ea-e94353608314\" (UID: \"d4a71982-b99f-48e8-b1ea-e94353608314\") " Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.187829 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a71982-b99f-48e8-b1ea-e94353608314-catalog-content\") pod \"d4a71982-b99f-48e8-b1ea-e94353608314\" (UID: \"d4a71982-b99f-48e8-b1ea-e94353608314\") " Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.188596 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a71982-b99f-48e8-b1ea-e94353608314-utilities" (OuterVolumeSpecName: "utilities") pod "d4a71982-b99f-48e8-b1ea-e94353608314" (UID: "d4a71982-b99f-48e8-b1ea-e94353608314"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.197376 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4a71982-b99f-48e8-b1ea-e94353608314-kube-api-access-gkstn" (OuterVolumeSpecName: "kube-api-access-gkstn") pod "d4a71982-b99f-48e8-b1ea-e94353608314" (UID: "d4a71982-b99f-48e8-b1ea-e94353608314"). InnerVolumeSpecName "kube-api-access-gkstn". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.210581 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a71982-b99f-48e8-b1ea-e94353608314-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4a71982-b99f-48e8-b1ea-e94353608314" (UID: "d4a71982-b99f-48e8-b1ea-e94353608314"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.288750 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkstn\" (UniqueName: \"kubernetes.io/projected/d4a71982-b99f-48e8-b1ea-e94353608314-kube-api-access-gkstn\") on node \"crc\" DevicePath \"\""
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.288792 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a71982-b99f-48e8-b1ea-e94353608314-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.288801 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a71982-b99f-48e8-b1ea-e94353608314-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.694120 4730 generic.go:334] "Generic (PLEG): container finished" podID="d4a71982-b99f-48e8-b1ea-e94353608314" containerID="753f2b5739eedbf4fae35f8e3af16335f28cc439fdfa107ab071996d5dfc5b0a" exitCode=0
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.694222 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz6hs" event={"ID":"d4a71982-b99f-48e8-b1ea-e94353608314","Type":"ContainerDied","Data":"753f2b5739eedbf4fae35f8e3af16335f28cc439fdfa107ab071996d5dfc5b0a"}
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.694255 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz6hs" event={"ID":"d4a71982-b99f-48e8-b1ea-e94353608314","Type":"ContainerDied","Data":"0be4fd2d4a5ba2f388432a75c574d52911698edde512e5b8f19052bf513af2be"}
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.694277 4730 scope.go:117] "RemoveContainer" containerID="753f2b5739eedbf4fae35f8e3af16335f28cc439fdfa107ab071996d5dfc5b0a"
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.694340 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz6hs"
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.700578 4730 generic.go:334] "Generic (PLEG): container finished" podID="983c3685-ff24-415e-b5f2-3e3c5d395acb" containerID="968db7e481ad29f8546e539f9ea1810477553b98a3c1e0580d2e537a85fdbf4b" exitCode=0
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.700656 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n89wl" event={"ID":"983c3685-ff24-415e-b5f2-3e3c5d395acb","Type":"ContainerDied","Data":"968db7e481ad29f8546e539f9ea1810477553b98a3c1e0580d2e537a85fdbf4b"}
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.746826 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz6hs"]
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.747323 4730 scope.go:117] "RemoveContainer" containerID="3afd5ab580298c885ea62fc18a42ef50ebc8833e951eaa41ee9dd93270c082ad"
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.753343 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz6hs"]
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.770087 4730 scope.go:117] "RemoveContainer" containerID="c7346818ca214be8c45341d7c9e9f4181b9d4b63d27704d103db44740d1e188f"
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.796994 4730 scope.go:117] "RemoveContainer" containerID="753f2b5739eedbf4fae35f8e3af16335f28cc439fdfa107ab071996d5dfc5b0a"
Feb 02 07:41:50 crc kubenswrapper[4730]: E0202 07:41:50.797628 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"753f2b5739eedbf4fae35f8e3af16335f28cc439fdfa107ab071996d5dfc5b0a\": container with ID starting with 753f2b5739eedbf4fae35f8e3af16335f28cc439fdfa107ab071996d5dfc5b0a not found: ID does not exist" containerID="753f2b5739eedbf4fae35f8e3af16335f28cc439fdfa107ab071996d5dfc5b0a"
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.797660 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753f2b5739eedbf4fae35f8e3af16335f28cc439fdfa107ab071996d5dfc5b0a"} err="failed to get container status \"753f2b5739eedbf4fae35f8e3af16335f28cc439fdfa107ab071996d5dfc5b0a\": rpc error: code = NotFound desc = could not find container \"753f2b5739eedbf4fae35f8e3af16335f28cc439fdfa107ab071996d5dfc5b0a\": container with ID starting with 753f2b5739eedbf4fae35f8e3af16335f28cc439fdfa107ab071996d5dfc5b0a not found: ID does not exist"
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.797685 4730 scope.go:117] "RemoveContainer" containerID="3afd5ab580298c885ea62fc18a42ef50ebc8833e951eaa41ee9dd93270c082ad"
Feb 02 07:41:50 crc kubenswrapper[4730]: E0202 07:41:50.798097 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3afd5ab580298c885ea62fc18a42ef50ebc8833e951eaa41ee9dd93270c082ad\": container with ID starting with 3afd5ab580298c885ea62fc18a42ef50ebc8833e951eaa41ee9dd93270c082ad not found: ID does not exist" containerID="3afd5ab580298c885ea62fc18a42ef50ebc8833e951eaa41ee9dd93270c082ad"
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.798130 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3afd5ab580298c885ea62fc18a42ef50ebc8833e951eaa41ee9dd93270c082ad"} err="failed to get container status \"3afd5ab580298c885ea62fc18a42ef50ebc8833e951eaa41ee9dd93270c082ad\": rpc error: code = NotFound desc = could not find container \"3afd5ab580298c885ea62fc18a42ef50ebc8833e951eaa41ee9dd93270c082ad\": container with ID starting with 3afd5ab580298c885ea62fc18a42ef50ebc8833e951eaa41ee9dd93270c082ad not found: ID does not exist"
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.798149 4730 scope.go:117] "RemoveContainer" containerID="c7346818ca214be8c45341d7c9e9f4181b9d4b63d27704d103db44740d1e188f"
Feb 02 07:41:50 crc kubenswrapper[4730]: E0202 07:41:50.798752 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7346818ca214be8c45341d7c9e9f4181b9d4b63d27704d103db44740d1e188f\": container with ID starting with c7346818ca214be8c45341d7c9e9f4181b9d4b63d27704d103db44740d1e188f not found: ID does not exist" containerID="c7346818ca214be8c45341d7c9e9f4181b9d4b63d27704d103db44740d1e188f"
Feb 02 07:41:50 crc kubenswrapper[4730]: I0202 07:41:50.798794 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7346818ca214be8c45341d7c9e9f4181b9d4b63d27704d103db44740d1e188f"} err="failed to get container status \"c7346818ca214be8c45341d7c9e9f4181b9d4b63d27704d103db44740d1e188f\": rpc error: code = NotFound desc = could not find container \"c7346818ca214be8c45341d7c9e9f4181b9d4b63d27704d103db44740d1e188f\": container with ID starting with c7346818ca214be8c45341d7c9e9f4181b9d4b63d27704d103db44740d1e188f not found: ID does not exist"
Feb 02 07:41:51 crc kubenswrapper[4730]: I0202 07:41:51.261301 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4a71982-b99f-48e8-b1ea-e94353608314" path="/var/lib/kubelet/pods/d4a71982-b99f-48e8-b1ea-e94353608314/volumes"
Feb 02 07:41:51 crc kubenswrapper[4730]: I0202 07:41:51.707046 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n89wl" event={"ID":"983c3685-ff24-415e-b5f2-3e3c5d395acb","Type":"ContainerStarted","Data":"e6ada85fe7572df7121dbc292152aa986df4a3877aa5a9b2aaa41926990d6cf7"}
Feb 02 07:41:51 crc kubenswrapper[4730]: I0202 07:41:51.859471 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z8st5"
Feb 02 07:41:51 crc kubenswrapper[4730]: I0202 07:41:51.876396 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n89wl" podStartSLOduration=2.446995886 podStartE2EDuration="5.876375049s" podCreationTimestamp="2026-02-02 07:41:46 +0000 UTC" firstStartedPulling="2026-02-02 07:41:47.666503956 +0000 UTC m=+881.087707304" lastFinishedPulling="2026-02-02 07:41:51.095883109 +0000 UTC m=+884.517086467" observedRunningTime="2026-02-02 07:41:51.730096774 +0000 UTC m=+885.151300122" watchObservedRunningTime="2026-02-02 07:41:51.876375049 +0000 UTC m=+885.297578397"
Feb 02 07:41:55 crc kubenswrapper[4730]: I0202 07:41:55.121385 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8st5"]
Feb 02 07:41:55 crc kubenswrapper[4730]: I0202 07:41:55.473103 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mdtqb"]
Feb 02 07:41:55 crc kubenswrapper[4730]: I0202 07:41:55.473332 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mdtqb" podUID="4f71e670-eb96-4321-af75-8ef24727cb13" containerName="registry-server" containerID="cri-o://9a575a1f467c09d94610a606fb8ef4456d6452919edfe1a4413f2822430f47c7" gracePeriod=2
Feb 02 07:41:56 crc kubenswrapper[4730]: I0202 07:41:56.437522 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n89wl"
Feb 02 07:41:56 crc kubenswrapper[4730]: I0202 07:41:56.438053 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n89wl"
Feb 02 07:41:56 crc kubenswrapper[4730]: I0202 07:41:56.743554 4730 generic.go:334] "Generic (PLEG): container finished" podID="4f71e670-eb96-4321-af75-8ef24727cb13" containerID="9a575a1f467c09d94610a606fb8ef4456d6452919edfe1a4413f2822430f47c7" exitCode=0
Feb 02 07:41:56 crc kubenswrapper[4730]: I0202 07:41:56.743614 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mdtqb" event={"ID":"4f71e670-eb96-4321-af75-8ef24727cb13","Type":"ContainerDied","Data":"9a575a1f467c09d94610a606fb8ef4456d6452919edfe1a4413f2822430f47c7"}
Feb 02 07:41:56 crc kubenswrapper[4730]: I0202 07:41:56.999805 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mdtqb"
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.084338 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f71e670-eb96-4321-af75-8ef24727cb13-utilities\") pod \"4f71e670-eb96-4321-af75-8ef24727cb13\" (UID: \"4f71e670-eb96-4321-af75-8ef24727cb13\") "
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.084476 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gbj8\" (UniqueName: \"kubernetes.io/projected/4f71e670-eb96-4321-af75-8ef24727cb13-kube-api-access-8gbj8\") pod \"4f71e670-eb96-4321-af75-8ef24727cb13\" (UID: \"4f71e670-eb96-4321-af75-8ef24727cb13\") "
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.084571 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f71e670-eb96-4321-af75-8ef24727cb13-catalog-content\") pod \"4f71e670-eb96-4321-af75-8ef24727cb13\" (UID: \"4f71e670-eb96-4321-af75-8ef24727cb13\") "
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.085279 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f71e670-eb96-4321-af75-8ef24727cb13-utilities" (OuterVolumeSpecName: "utilities") pod "4f71e670-eb96-4321-af75-8ef24727cb13" (UID: "4f71e670-eb96-4321-af75-8ef24727cb13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.089687 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f71e670-eb96-4321-af75-8ef24727cb13-kube-api-access-8gbj8" (OuterVolumeSpecName: "kube-api-access-8gbj8") pod "4f71e670-eb96-4321-af75-8ef24727cb13" (UID: "4f71e670-eb96-4321-af75-8ef24727cb13"). InnerVolumeSpecName "kube-api-access-8gbj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.144996 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f71e670-eb96-4321-af75-8ef24727cb13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f71e670-eb96-4321-af75-8ef24727cb13" (UID: "4f71e670-eb96-4321-af75-8ef24727cb13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.186226 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gbj8\" (UniqueName: \"kubernetes.io/projected/4f71e670-eb96-4321-af75-8ef24727cb13-kube-api-access-8gbj8\") on node \"crc\" DevicePath \"\""
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.186261 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f71e670-eb96-4321-af75-8ef24727cb13-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.186271 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f71e670-eb96-4321-af75-8ef24727cb13-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.514010 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n89wl" podUID="983c3685-ff24-415e-b5f2-3e3c5d395acb" containerName="registry-server" probeResult="failure" output=<
Feb 02 07:41:57 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s
Feb 02 07:41:57 crc kubenswrapper[4730]: >
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.750789 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mdtqb" event={"ID":"4f71e670-eb96-4321-af75-8ef24727cb13","Type":"ContainerDied","Data":"2095e14570c704a22af1c32a9b711c7580dd116cde2fc797b2317183676a36b3"}
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.750847 4730 scope.go:117] "RemoveContainer" containerID="9a575a1f467c09d94610a606fb8ef4456d6452919edfe1a4413f2822430f47c7"
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.750886 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mdtqb"
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.778243 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mdtqb"]
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.779525 4730 scope.go:117] "RemoveContainer" containerID="c32334867ffd8fce441e091cc52712c123899d159d6813a9854d74c1ebf274aa"
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.786072 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mdtqb"]
Feb 02 07:41:57 crc kubenswrapper[4730]: I0202 07:41:57.798684 4730 scope.go:117] "RemoveContainer" containerID="5b7d5860ec34f0b11a5dd01a28079715b8086e1bb7f7653abe0bfb70c4ed7496"
Feb 02 07:41:59 crc kubenswrapper[4730]: I0202 07:41:59.259135 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f71e670-eb96-4321-af75-8ef24727cb13" path="/var/lib/kubelet/pods/4f71e670-eb96-4321-af75-8ef24727cb13/volumes"
Feb 02 07:42:06 crc kubenswrapper[4730]: I0202 07:42:06.503741 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n89wl"
Feb 02 07:42:06 crc kubenswrapper[4730]: I0202 07:42:06.588967 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n89wl"
Feb 02 07:42:07 crc kubenswrapper[4730]: I0202 07:42:07.751216 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n89wl"]
Feb 02 07:42:07 crc kubenswrapper[4730]: I0202 07:42:07.822136 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n89wl" podUID="983c3685-ff24-415e-b5f2-3e3c5d395acb" containerName="registry-server" containerID="cri-o://e6ada85fe7572df7121dbc292152aa986df4a3877aa5a9b2aaa41926990d6cf7" gracePeriod=2
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.222760 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n89wl"
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.336328 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983c3685-ff24-415e-b5f2-3e3c5d395acb-catalog-content\") pod \"983c3685-ff24-415e-b5f2-3e3c5d395acb\" (UID: \"983c3685-ff24-415e-b5f2-3e3c5d395acb\") "
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.336415 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8bzf\" (UniqueName: \"kubernetes.io/projected/983c3685-ff24-415e-b5f2-3e3c5d395acb-kube-api-access-w8bzf\") pod \"983c3685-ff24-415e-b5f2-3e3c5d395acb\" (UID: \"983c3685-ff24-415e-b5f2-3e3c5d395acb\") "
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.336538 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983c3685-ff24-415e-b5f2-3e3c5d395acb-utilities\") pod \"983c3685-ff24-415e-b5f2-3e3c5d395acb\" (UID: \"983c3685-ff24-415e-b5f2-3e3c5d395acb\") "
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.338422 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/983c3685-ff24-415e-b5f2-3e3c5d395acb-utilities" (OuterVolumeSpecName: "utilities") pod "983c3685-ff24-415e-b5f2-3e3c5d395acb" (UID: "983c3685-ff24-415e-b5f2-3e3c5d395acb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.343776 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983c3685-ff24-415e-b5f2-3e3c5d395acb-kube-api-access-w8bzf" (OuterVolumeSpecName: "kube-api-access-w8bzf") pod "983c3685-ff24-415e-b5f2-3e3c5d395acb" (UID: "983c3685-ff24-415e-b5f2-3e3c5d395acb"). InnerVolumeSpecName "kube-api-access-w8bzf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.438664 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8bzf\" (UniqueName: \"kubernetes.io/projected/983c3685-ff24-415e-b5f2-3e3c5d395acb-kube-api-access-w8bzf\") on node \"crc\" DevicePath \"\""
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.438695 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983c3685-ff24-415e-b5f2-3e3c5d395acb-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.478593 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/983c3685-ff24-415e-b5f2-3e3c5d395acb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "983c3685-ff24-415e-b5f2-3e3c5d395acb" (UID: "983c3685-ff24-415e-b5f2-3e3c5d395acb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.539415 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983c3685-ff24-415e-b5f2-3e3c5d395acb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.832332 4730 generic.go:334] "Generic (PLEG): container finished" podID="983c3685-ff24-415e-b5f2-3e3c5d395acb" containerID="e6ada85fe7572df7121dbc292152aa986df4a3877aa5a9b2aaa41926990d6cf7" exitCode=0
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.832389 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n89wl" event={"ID":"983c3685-ff24-415e-b5f2-3e3c5d395acb","Type":"ContainerDied","Data":"e6ada85fe7572df7121dbc292152aa986df4a3877aa5a9b2aaa41926990d6cf7"}
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.832437 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n89wl" event={"ID":"983c3685-ff24-415e-b5f2-3e3c5d395acb","Type":"ContainerDied","Data":"81e6c7ee1d75d46a3f8f059ff3bb112a7ca51080c95d86996fd843d61ba9280b"}
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.832473 4730 scope.go:117] "RemoveContainer" containerID="e6ada85fe7572df7121dbc292152aa986df4a3877aa5a9b2aaa41926990d6cf7"
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.832527 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n89wl"
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.860262 4730 scope.go:117] "RemoveContainer" containerID="968db7e481ad29f8546e539f9ea1810477553b98a3c1e0580d2e537a85fdbf4b"
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.893521 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n89wl"]
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.894338 4730 scope.go:117] "RemoveContainer" containerID="7f31c894ee8ba800cb0eca0d005d478c53adea705fcf9f96698df11c3f3897ea"
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.908106 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n89wl"]
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.932353 4730 scope.go:117] "RemoveContainer" containerID="e6ada85fe7572df7121dbc292152aa986df4a3877aa5a9b2aaa41926990d6cf7"
Feb 02 07:42:08 crc kubenswrapper[4730]: E0202 07:42:08.933068 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6ada85fe7572df7121dbc292152aa986df4a3877aa5a9b2aaa41926990d6cf7\": container with ID starting with e6ada85fe7572df7121dbc292152aa986df4a3877aa5a9b2aaa41926990d6cf7 not found: ID does not exist" containerID="e6ada85fe7572df7121dbc292152aa986df4a3877aa5a9b2aaa41926990d6cf7"
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.933122 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ada85fe7572df7121dbc292152aa986df4a3877aa5a9b2aaa41926990d6cf7"} err="failed to get container status \"e6ada85fe7572df7121dbc292152aa986df4a3877aa5a9b2aaa41926990d6cf7\": rpc error: code = NotFound desc = could not find container \"e6ada85fe7572df7121dbc292152aa986df4a3877aa5a9b2aaa41926990d6cf7\": container with ID starting with e6ada85fe7572df7121dbc292152aa986df4a3877aa5a9b2aaa41926990d6cf7 not found: ID does not exist"
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.933156 4730 scope.go:117] "RemoveContainer" containerID="968db7e481ad29f8546e539f9ea1810477553b98a3c1e0580d2e537a85fdbf4b"
Feb 02 07:42:08 crc kubenswrapper[4730]: E0202 07:42:08.933746 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"968db7e481ad29f8546e539f9ea1810477553b98a3c1e0580d2e537a85fdbf4b\": container with ID starting with 968db7e481ad29f8546e539f9ea1810477553b98a3c1e0580d2e537a85fdbf4b not found: ID does not exist" containerID="968db7e481ad29f8546e539f9ea1810477553b98a3c1e0580d2e537a85fdbf4b"
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.933838 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"968db7e481ad29f8546e539f9ea1810477553b98a3c1e0580d2e537a85fdbf4b"} err="failed to get container status \"968db7e481ad29f8546e539f9ea1810477553b98a3c1e0580d2e537a85fdbf4b\": rpc error: code = NotFound desc = could not find container \"968db7e481ad29f8546e539f9ea1810477553b98a3c1e0580d2e537a85fdbf4b\": container with ID starting with 968db7e481ad29f8546e539f9ea1810477553b98a3c1e0580d2e537a85fdbf4b not found: ID does not exist"
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.934011 4730 scope.go:117] "RemoveContainer" containerID="7f31c894ee8ba800cb0eca0d005d478c53adea705fcf9f96698df11c3f3897ea"
Feb 02 07:42:08 crc kubenswrapper[4730]: E0202 07:42:08.934527 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f31c894ee8ba800cb0eca0d005d478c53adea705fcf9f96698df11c3f3897ea\": container with ID starting with 7f31c894ee8ba800cb0eca0d005d478c53adea705fcf9f96698df11c3f3897ea not found: ID does not exist" containerID="7f31c894ee8ba800cb0eca0d005d478c53adea705fcf9f96698df11c3f3897ea"
Feb 02 07:42:08 crc kubenswrapper[4730]: I0202 07:42:08.934604 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f31c894ee8ba800cb0eca0d005d478c53adea705fcf9f96698df11c3f3897ea"} err="failed to get container status \"7f31c894ee8ba800cb0eca0d005d478c53adea705fcf9f96698df11c3f3897ea\": rpc error: code = NotFound desc = could not find container \"7f31c894ee8ba800cb0eca0d005d478c53adea705fcf9f96698df11c3f3897ea\": container with ID starting with 7f31c894ee8ba800cb0eca0d005d478c53adea705fcf9f96698df11c3f3897ea not found: ID does not exist"
Feb 02 07:42:09 crc kubenswrapper[4730]: I0202 07:42:09.264511 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983c3685-ff24-415e-b5f2-3e3c5d395acb" path="/var/lib/kubelet/pods/983c3685-ff24-415e-b5f2-3e3c5d395acb/volumes"
Feb 02 07:42:27 crc kubenswrapper[4730]: I0202 07:42:27.660116 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:42:27 crc kubenswrapper[4730]: I0202 07:42:27.660776 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:42:41 crc kubenswrapper[4730]: I0202 07:42:41.044997 4730 generic.go:334] "Generic (PLEG): container finished" podID="814e361d-adf2-4eab-8544-3cb2d2b3e885" containerID="4c97a1969605c9bf4bf0efc9d8dfa4d2f4d8cbc22fc49bb9b61ab19523bfd4b9" exitCode=0
Feb 02 07:42:41 crc kubenswrapper[4730]: I0202 07:42:41.045108 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7bb6/must-gather-qw75m" event={"ID":"814e361d-adf2-4eab-8544-3cb2d2b3e885","Type":"ContainerDied","Data":"4c97a1969605c9bf4bf0efc9d8dfa4d2f4d8cbc22fc49bb9b61ab19523bfd4b9"}
Feb 02 07:42:41 crc kubenswrapper[4730]: I0202 07:42:41.046349 4730 scope.go:117] "RemoveContainer" containerID="4c97a1969605c9bf4bf0efc9d8dfa4d2f4d8cbc22fc49bb9b61ab19523bfd4b9"
Feb 02 07:42:42 crc kubenswrapper[4730]: I0202 07:42:42.022558 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v7bb6_must-gather-qw75m_814e361d-adf2-4eab-8544-3cb2d2b3e885/gather/0.log"
Feb 02 07:42:49 crc kubenswrapper[4730]: I0202 07:42:49.037264 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v7bb6/must-gather-qw75m"]
Feb 02 07:42:49 crc kubenswrapper[4730]: I0202 07:42:49.038344 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-v7bb6/must-gather-qw75m" podUID="814e361d-adf2-4eab-8544-3cb2d2b3e885" containerName="copy" containerID="cri-o://1ff8e58e220201efaf47bd8322c0cd9b38b53b631dc53d59327cdd3c48676140" gracePeriod=2
Feb 02 07:42:49 crc kubenswrapper[4730]: I0202 07:42:49.080826 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v7bb6/must-gather-qw75m"]
Feb 02 07:42:49 crc kubenswrapper[4730]: I0202 07:42:49.400590 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v7bb6_must-gather-qw75m_814e361d-adf2-4eab-8544-3cb2d2b3e885/copy/0.log"
Feb 02 07:42:49 crc kubenswrapper[4730]: I0202 07:42:49.401418 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v7bb6/must-gather-qw75m"
Feb 02 07:42:49 crc kubenswrapper[4730]: I0202 07:42:49.540020 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n77cw\" (UniqueName: \"kubernetes.io/projected/814e361d-adf2-4eab-8544-3cb2d2b3e885-kube-api-access-n77cw\") pod \"814e361d-adf2-4eab-8544-3cb2d2b3e885\" (UID: \"814e361d-adf2-4eab-8544-3cb2d2b3e885\") "
Feb 02 07:42:49 crc kubenswrapper[4730]: I0202 07:42:49.540307 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/814e361d-adf2-4eab-8544-3cb2d2b3e885-must-gather-output\") pod \"814e361d-adf2-4eab-8544-3cb2d2b3e885\" (UID: \"814e361d-adf2-4eab-8544-3cb2d2b3e885\") "
Feb 02 07:42:49 crc kubenswrapper[4730]: I0202 07:42:49.546393 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/814e361d-adf2-4eab-8544-3cb2d2b3e885-kube-api-access-n77cw" (OuterVolumeSpecName: "kube-api-access-n77cw") pod "814e361d-adf2-4eab-8544-3cb2d2b3e885" (UID: "814e361d-adf2-4eab-8544-3cb2d2b3e885"). InnerVolumeSpecName "kube-api-access-n77cw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:42:49 crc kubenswrapper[4730]: I0202 07:42:49.584572 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814e361d-adf2-4eab-8544-3cb2d2b3e885-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "814e361d-adf2-4eab-8544-3cb2d2b3e885" (UID: "814e361d-adf2-4eab-8544-3cb2d2b3e885"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:42:49 crc kubenswrapper[4730]: I0202 07:42:49.642131 4730 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/814e361d-adf2-4eab-8544-3cb2d2b3e885-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 02 07:42:49 crc kubenswrapper[4730]: I0202 07:42:49.642238 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n77cw\" (UniqueName: \"kubernetes.io/projected/814e361d-adf2-4eab-8544-3cb2d2b3e885-kube-api-access-n77cw\") on node \"crc\" DevicePath \"\""
Feb 02 07:42:50 crc kubenswrapper[4730]: I0202 07:42:50.106514 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v7bb6_must-gather-qw75m_814e361d-adf2-4eab-8544-3cb2d2b3e885/copy/0.log"
Feb 02 07:42:50 crc kubenswrapper[4730]: I0202 07:42:50.106938 4730 generic.go:334] "Generic (PLEG): container finished" podID="814e361d-adf2-4eab-8544-3cb2d2b3e885" containerID="1ff8e58e220201efaf47bd8322c0cd9b38b53b631dc53d59327cdd3c48676140" exitCode=143
Feb 02 07:42:50 crc kubenswrapper[4730]: I0202 07:42:50.106980 4730 scope.go:117] "RemoveContainer" containerID="1ff8e58e220201efaf47bd8322c0cd9b38b53b631dc53d59327cdd3c48676140"
Feb 02 07:42:50 crc kubenswrapper[4730]: I0202 07:42:50.107026 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v7bb6/must-gather-qw75m"
Feb 02 07:42:50 crc kubenswrapper[4730]: I0202 07:42:50.135958 4730 scope.go:117] "RemoveContainer" containerID="4c97a1969605c9bf4bf0efc9d8dfa4d2f4d8cbc22fc49bb9b61ab19523bfd4b9"
Feb 02 07:42:50 crc kubenswrapper[4730]: I0202 07:42:50.175945 4730 scope.go:117] "RemoveContainer" containerID="1ff8e58e220201efaf47bd8322c0cd9b38b53b631dc53d59327cdd3c48676140"
Feb 02 07:42:50 crc kubenswrapper[4730]: E0202 07:42:50.176353 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ff8e58e220201efaf47bd8322c0cd9b38b53b631dc53d59327cdd3c48676140\": container with ID starting with 1ff8e58e220201efaf47bd8322c0cd9b38b53b631dc53d59327cdd3c48676140 not found: ID does not exist" containerID="1ff8e58e220201efaf47bd8322c0cd9b38b53b631dc53d59327cdd3c48676140"
Feb 02 07:42:50 crc kubenswrapper[4730]: I0202 07:42:50.176382 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff8e58e220201efaf47bd8322c0cd9b38b53b631dc53d59327cdd3c48676140"} err="failed to get container status \"1ff8e58e220201efaf47bd8322c0cd9b38b53b631dc53d59327cdd3c48676140\": rpc error: code = NotFound desc = could not find container \"1ff8e58e220201efaf47bd8322c0cd9b38b53b631dc53d59327cdd3c48676140\": container with ID starting with 1ff8e58e220201efaf47bd8322c0cd9b38b53b631dc53d59327cdd3c48676140 not found: ID does not exist"
Feb 02 07:42:50 crc kubenswrapper[4730]: I0202 07:42:50.176403 4730 scope.go:117] "RemoveContainer" containerID="4c97a1969605c9bf4bf0efc9d8dfa4d2f4d8cbc22fc49bb9b61ab19523bfd4b9"
Feb 02 07:42:50 crc kubenswrapper[4730]: E0202 07:42:50.176674 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c97a1969605c9bf4bf0efc9d8dfa4d2f4d8cbc22fc49bb9b61ab19523bfd4b9\": container with ID starting with 4c97a1969605c9bf4bf0efc9d8dfa4d2f4d8cbc22fc49bb9b61ab19523bfd4b9 not found: ID does not exist" containerID="4c97a1969605c9bf4bf0efc9d8dfa4d2f4d8cbc22fc49bb9b61ab19523bfd4b9"
Feb 02 07:42:50 crc kubenswrapper[4730]: I0202 07:42:50.176722 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c97a1969605c9bf4bf0efc9d8dfa4d2f4d8cbc22fc49bb9b61ab19523bfd4b9"} err="failed to get container status \"4c97a1969605c9bf4bf0efc9d8dfa4d2f4d8cbc22fc49bb9b61ab19523bfd4b9\": rpc error: code = NotFound desc = could not find container \"4c97a1969605c9bf4bf0efc9d8dfa4d2f4d8cbc22fc49bb9b61ab19523bfd4b9\": container with ID starting with 4c97a1969605c9bf4bf0efc9d8dfa4d2f4d8cbc22fc49bb9b61ab19523bfd4b9 not found: ID does not exist"
Feb 02 07:42:51 crc kubenswrapper[4730]: I0202 07:42:51.262959 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="814e361d-adf2-4eab-8544-3cb2d2b3e885" path="/var/lib/kubelet/pods/814e361d-adf2-4eab-8544-3cb2d2b3e885/volumes"
Feb 02 07:42:57 crc kubenswrapper[4730]: I0202 07:42:57.660135 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:42:57 crc kubenswrapper[4730]: I0202 07:42:57.662416 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:43:27 crc kubenswrapper[4730]: I0202 07:43:27.660687 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:43:27 crc kubenswrapper[4730]: I0202 07:43:27.661525 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:43:27 crc kubenswrapper[4730]: I0202 07:43:27.661598 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t"
Feb 02 07:43:27 crc kubenswrapper[4730]: I0202 07:43:27.662456 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a43717ad3d7717ab86054d33ca1e44b954f9959306be1515fa3cef3d728da55"} pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 07:43:27 crc kubenswrapper[4730]: I0202 07:43:27.662559 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" containerID="cri-o://2a43717ad3d7717ab86054d33ca1e44b954f9959306be1515fa3cef3d728da55" gracePeriod=600
Feb 02 07:43:28 crc kubenswrapper[4730]: I0202 07:43:28.413346 4730 generic.go:334] "Generic (PLEG): container finished" podID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerID="2a43717ad3d7717ab86054d33ca1e44b954f9959306be1515fa3cef3d728da55" exitCode=0
Feb 02 07:43:28 crc kubenswrapper[4730]: I0202 07:43:28.413396 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t"
event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerDied","Data":"2a43717ad3d7717ab86054d33ca1e44b954f9959306be1515fa3cef3d728da55"} Feb 02 07:43:28 crc kubenswrapper[4730]: I0202 07:43:28.413727 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerStarted","Data":"6e04abba3605459bd32c2ae818606756c1795c2696313b65816f0d598679b641"} Feb 02 07:43:28 crc kubenswrapper[4730]: I0202 07:43:28.413749 4730 scope.go:117] "RemoveContainer" containerID="55eac5b141933721b9ad3aae038fd186f17ff4ec4c4e26d6f58c125af5debbac" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.203650 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h"] Feb 02 07:45:00 crc kubenswrapper[4730]: E0202 07:45:00.204847 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a71982-b99f-48e8-b1ea-e94353608314" containerName="extract-utilities" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.204871 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a71982-b99f-48e8-b1ea-e94353608314" containerName="extract-utilities" Feb 02 07:45:00 crc kubenswrapper[4730]: E0202 07:45:00.204895 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814e361d-adf2-4eab-8544-3cb2d2b3e885" containerName="copy" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.204909 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="814e361d-adf2-4eab-8544-3cb2d2b3e885" containerName="copy" Feb 02 07:45:00 crc kubenswrapper[4730]: E0202 07:45:00.204925 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814e361d-adf2-4eab-8544-3cb2d2b3e885" containerName="gather" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.204936 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="814e361d-adf2-4eab-8544-3cb2d2b3e885" 
containerName="gather" Feb 02 07:45:00 crc kubenswrapper[4730]: E0202 07:45:00.204953 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f71e670-eb96-4321-af75-8ef24727cb13" containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.204963 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f71e670-eb96-4321-af75-8ef24727cb13" containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4730]: E0202 07:45:00.204975 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983c3685-ff24-415e-b5f2-3e3c5d395acb" containerName="extract-content" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.204986 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="983c3685-ff24-415e-b5f2-3e3c5d395acb" containerName="extract-content" Feb 02 07:45:00 crc kubenswrapper[4730]: E0202 07:45:00.205000 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f71e670-eb96-4321-af75-8ef24727cb13" containerName="extract-content" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.205011 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f71e670-eb96-4321-af75-8ef24727cb13" containerName="extract-content" Feb 02 07:45:00 crc kubenswrapper[4730]: E0202 07:45:00.205028 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a71982-b99f-48e8-b1ea-e94353608314" containerName="extract-content" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.205038 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a71982-b99f-48e8-b1ea-e94353608314" containerName="extract-content" Feb 02 07:45:00 crc kubenswrapper[4730]: E0202 07:45:00.205054 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f71e670-eb96-4321-af75-8ef24727cb13" containerName="extract-utilities" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.205063 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f71e670-eb96-4321-af75-8ef24727cb13" 
containerName="extract-utilities" Feb 02 07:45:00 crc kubenswrapper[4730]: E0202 07:45:00.205093 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983c3685-ff24-415e-b5f2-3e3c5d395acb" containerName="extract-utilities" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.205103 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="983c3685-ff24-415e-b5f2-3e3c5d395acb" containerName="extract-utilities" Feb 02 07:45:00 crc kubenswrapper[4730]: E0202 07:45:00.205116 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a71982-b99f-48e8-b1ea-e94353608314" containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.205125 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a71982-b99f-48e8-b1ea-e94353608314" containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4730]: E0202 07:45:00.205138 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983c3685-ff24-415e-b5f2-3e3c5d395acb" containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.205146 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="983c3685-ff24-415e-b5f2-3e3c5d395acb" containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.205324 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f71e670-eb96-4321-af75-8ef24727cb13" containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.205339 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="814e361d-adf2-4eab-8544-3cb2d2b3e885" containerName="copy" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.205355 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="983c3685-ff24-415e-b5f2-3e3c5d395acb" containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.205367 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a71982-b99f-48e8-b1ea-e94353608314" 
containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.205381 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="814e361d-adf2-4eab-8544-3cb2d2b3e885" containerName="gather" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.205894 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.208921 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.209281 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.217806 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h"] Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.385537 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-config-volume\") pod \"collect-profiles-29500305-pzk4h\" (UID: \"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.385629 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkcrc\" (UniqueName: \"kubernetes.io/projected/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-kube-api-access-zkcrc\") pod \"collect-profiles-29500305-pzk4h\" (UID: \"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.385662 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-secret-volume\") pod \"collect-profiles-29500305-pzk4h\" (UID: \"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.487394 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-config-volume\") pod \"collect-profiles-29500305-pzk4h\" (UID: \"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.487562 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkcrc\" (UniqueName: \"kubernetes.io/projected/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-kube-api-access-zkcrc\") pod \"collect-profiles-29500305-pzk4h\" (UID: \"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.487613 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-secret-volume\") pod \"collect-profiles-29500305-pzk4h\" (UID: \"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.489247 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-config-volume\") pod \"collect-profiles-29500305-pzk4h\" (UID: \"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.498770 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-secret-volume\") pod \"collect-profiles-29500305-pzk4h\" (UID: \"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.517696 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkcrc\" (UniqueName: \"kubernetes.io/projected/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-kube-api-access-zkcrc\") pod \"collect-profiles-29500305-pzk4h\" (UID: \"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.527435 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" Feb 02 07:45:00 crc kubenswrapper[4730]: I0202 07:45:00.762249 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h"] Feb 02 07:45:00 crc kubenswrapper[4730]: W0202 07:45:00.770951 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a98d0a3_2fb8_4acf_80f2_46ba14a8c995.slice/crio-3fa9cf0a0c059001ef6575d16eccd2cc5e2dde6db9186e658f037ba46bc87560 WatchSource:0}: Error finding container 3fa9cf0a0c059001ef6575d16eccd2cc5e2dde6db9186e658f037ba46bc87560: Status 404 returned error can't find the container with id 3fa9cf0a0c059001ef6575d16eccd2cc5e2dde6db9186e658f037ba46bc87560 Feb 02 07:45:01 crc kubenswrapper[4730]: I0202 07:45:01.076937 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" event={"ID":"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995","Type":"ContainerStarted","Data":"5fa498fcb2d76cc3846b2cea44a56aac1925d389bed8d5a68673259f5236aee9"} Feb 02 07:45:01 crc kubenswrapper[4730]: I0202 07:45:01.077461 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" event={"ID":"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995","Type":"ContainerStarted","Data":"3fa9cf0a0c059001ef6575d16eccd2cc5e2dde6db9186e658f037ba46bc87560"} Feb 02 07:45:01 crc kubenswrapper[4730]: I0202 07:45:01.104335 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" podStartSLOduration=1.10430561 podStartE2EDuration="1.10430561s" podCreationTimestamp="2026-02-02 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 
07:45:01.100819948 +0000 UTC m=+1074.522023356" watchObservedRunningTime="2026-02-02 07:45:01.10430561 +0000 UTC m=+1074.525508998" Feb 02 07:45:02 crc kubenswrapper[4730]: I0202 07:45:02.085003 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" event={"ID":"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995","Type":"ContainerDied","Data":"5fa498fcb2d76cc3846b2cea44a56aac1925d389bed8d5a68673259f5236aee9"} Feb 02 07:45:02 crc kubenswrapper[4730]: I0202 07:45:02.084548 4730 generic.go:334] "Generic (PLEG): container finished" podID="9a98d0a3-2fb8-4acf-80f2-46ba14a8c995" containerID="5fa498fcb2d76cc3846b2cea44a56aac1925d389bed8d5a68673259f5236aee9" exitCode=0 Feb 02 07:45:03 crc kubenswrapper[4730]: I0202 07:45:03.420669 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" Feb 02 07:45:03 crc kubenswrapper[4730]: I0202 07:45:03.527221 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-secret-volume\") pod \"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\" (UID: \"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\") " Feb 02 07:45:03 crc kubenswrapper[4730]: I0202 07:45:03.527629 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-config-volume\") pod \"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\" (UID: \"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\") " Feb 02 07:45:03 crc kubenswrapper[4730]: I0202 07:45:03.527780 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkcrc\" (UniqueName: \"kubernetes.io/projected/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-kube-api-access-zkcrc\") pod \"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\" (UID: 
\"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995\") " Feb 02 07:45:03 crc kubenswrapper[4730]: I0202 07:45:03.528278 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-config-volume" (OuterVolumeSpecName: "config-volume") pod "9a98d0a3-2fb8-4acf-80f2-46ba14a8c995" (UID: "9a98d0a3-2fb8-4acf-80f2-46ba14a8c995"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:45:03 crc kubenswrapper[4730]: I0202 07:45:03.529339 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:45:03 crc kubenswrapper[4730]: I0202 07:45:03.534326 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9a98d0a3-2fb8-4acf-80f2-46ba14a8c995" (UID: "9a98d0a3-2fb8-4acf-80f2-46ba14a8c995"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:45:03 crc kubenswrapper[4730]: I0202 07:45:03.534616 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-kube-api-access-zkcrc" (OuterVolumeSpecName: "kube-api-access-zkcrc") pod "9a98d0a3-2fb8-4acf-80f2-46ba14a8c995" (UID: "9a98d0a3-2fb8-4acf-80f2-46ba14a8c995"). InnerVolumeSpecName "kube-api-access-zkcrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:45:03 crc kubenswrapper[4730]: I0202 07:45:03.630469 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:45:03 crc kubenswrapper[4730]: I0202 07:45:03.630532 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkcrc\" (UniqueName: \"kubernetes.io/projected/9a98d0a3-2fb8-4acf-80f2-46ba14a8c995-kube-api-access-zkcrc\") on node \"crc\" DevicePath \"\"" Feb 02 07:45:04 crc kubenswrapper[4730]: I0202 07:45:04.106984 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" event={"ID":"9a98d0a3-2fb8-4acf-80f2-46ba14a8c995","Type":"ContainerDied","Data":"3fa9cf0a0c059001ef6575d16eccd2cc5e2dde6db9186e658f037ba46bc87560"} Feb 02 07:45:04 crc kubenswrapper[4730]: I0202 07:45:04.107542 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fa9cf0a0c059001ef6575d16eccd2cc5e2dde6db9186e658f037ba46bc87560" Feb 02 07:45:04 crc kubenswrapper[4730]: I0202 07:45:04.107048 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-pzk4h" Feb 02 07:45:27 crc kubenswrapper[4730]: I0202 07:45:27.660039 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:45:27 crc kubenswrapper[4730]: I0202 07:45:27.660767 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:45:57 crc kubenswrapper[4730]: I0202 07:45:57.660646 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:45:57 crc kubenswrapper[4730]: I0202 07:45:57.661697 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:46:27 crc kubenswrapper[4730]: I0202 07:46:27.660300 4730 patch_prober.go:28] interesting pod/machine-config-daemon-ghs2t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:46:27 crc kubenswrapper[4730]: I0202 
07:46:27.660877 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:46:27 crc kubenswrapper[4730]: I0202 07:46:27.660942 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" Feb 02 07:46:27 crc kubenswrapper[4730]: I0202 07:46:27.661740 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e04abba3605459bd32c2ae818606756c1795c2696313b65816f0d598679b641"} pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:46:27 crc kubenswrapper[4730]: I0202 07:46:27.661819 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" podUID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerName="machine-config-daemon" containerID="cri-o://6e04abba3605459bd32c2ae818606756c1795c2696313b65816f0d598679b641" gracePeriod=600 Feb 02 07:46:28 crc kubenswrapper[4730]: I0202 07:46:28.678151 4730 generic.go:334] "Generic (PLEG): container finished" podID="61cde55f-e8c2-493e-82b6-a3b4a839366b" containerID="6e04abba3605459bd32c2ae818606756c1795c2696313b65816f0d598679b641" exitCode=0 Feb 02 07:46:28 crc kubenswrapper[4730]: I0202 07:46:28.678219 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerDied","Data":"6e04abba3605459bd32c2ae818606756c1795c2696313b65816f0d598679b641"} Feb 02 07:46:28 crc 
kubenswrapper[4730]: I0202 07:46:28.678721 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ghs2t" event={"ID":"61cde55f-e8c2-493e-82b6-a3b4a839366b","Type":"ContainerStarted","Data":"f5971c2b63b828ea5090a388f749e0d2c4fbb374a4f95873b9c2b6942efb3f26"} Feb 02 07:46:28 crc kubenswrapper[4730]: I0202 07:46:28.678743 4730 scope.go:117] "RemoveContainer" containerID="2a43717ad3d7717ab86054d33ca1e44b954f9959306be1515fa3cef3d728da55"